A Closed-Form EVSI Expression for a Multinomial Data-Generating Process

Adam Fleischhacker*, Pak-Wing Fok†, Mokshay Madiman‡, Nan Wu§

January 3, 2023

Abstract

This paper derives analytic expressions for the expected value of sample information (EVSI), the expected value of distribution information (EVDI), and the optimal sample size when data consists of independent draws from a bounded sequence of integers. Due to the challenges of creating tractable EVSI expressions, most existing work valuing data does so in one of three ways: 1) analytically, through closed-form expressions on the upper bound of the value of data; 2) by calculating the expected value of data using numerical comparisons of decisions made using simulated data to optimal decisions where the underlying data distribution is known; or 3) by using variance reduction as a proxy for the uncertainty reduction that accompanies more data. For the very flexible case of modelling integer-valued observations using a multinomial data-generating process with Dirichlet prior, this paper develops expressions that 1) generalize existing beta-binomial computations, 2) do not require prior knowledge of some underlying "true" distribution, and 3) can be computed prior to the collection of any sample data.
* Department of Business Administration, University of Delaware, Newark, DE 19716, email: ajf@udel.edu
† Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, email: pakwing@udel.edu
‡ Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, email: madiman@udel.edu
§ Institute for Financial Services Analytics, University of Delaware, Newark, DE 19716, email: nanw@udel.edu

arXiv:2301.00729v1 [stat.ME] 2 Dec 2022

1 Introduction

The seminal work of [34] introduced preposterior analysis, a Bayesian recipe for estimating the value of information (VOI) prior to knowing the information's content. The expected value of sample information (EVSI), a particularly valuable VOI computation, values the information contained in sample observations prior to their collection. [34] include many closed-form and oft-used expressions for calculating EVSI under the assumption of quadratic loss. One such expression is for a Bernoulli data-generating process with beta prior distribution (a.k.a. a beta-binomial model), each observation being either zero or one [34, Table 6.2, p. 191]. In this paper, we generalize the beta-binomial EVSI expression beyond binary-valued observations to the case where each data point is drawn from a bounded sequence of integers. These results expand the availability of tractable VOI expressions to a useful scenario where previously value could only be approximated or bounded when a closed-form expression was needed.

Depending on a modeler's choices of actions, states of uncertainty, loss (or utility) functions, and probability models, tractable calculations of VOI may exist, but intractable formulations, especially for EVSI, are much more common. In fact, reputed statistician Dennis Lindley has remarked that the question of sample size "is embarrassingly difficult to answer" due to difficulties calculating EVSI [26].
More generally, [14] shows that simply characterizing the relationship between information and value is challenging; [14]'s work dispels the idea that information value will reliably exhibit monotonic relationships with information-value determinants such as action flexibility, risk aversion, or a decision maker's wealth.

While closed-form solutions are attainable for some EVSI and VOI problems [34, 5, 4], value of information solutions are often difficult to formulate. Hence, many papers are known for their ability to characterize aspects of VOI expressions, such as the distributional properties of the expected value of perfect information (EVPI) [28], the impact of an exogenous variable on EVPI [20], and the additivity of information value when multiple sources of uncertainty exist [21]. EVSI calculations, in particular, often result in intractable expressions of multiple integrals where only numerical methods can yield results [25]. Even then, many numerical methods still require further simplifying assumptions (see, e.g., [36]). While it is possible to approximate VOI computations via normal approximations (see, e.g., [30, 19]) or via computationally intense simulation-based methodologies (see, e.g., [10, 37]), closed-form expressions yield instantaneous and accurate value computations with more interpretable insights regarding the effects of prior beliefs and sample sizes.

In this paper, we provide a new EVSI calculation for a flexible (i.e., multinomial) data-generating process that adheres to three desiderata outlined in [34, p. 44]:
+Generating Process +Conjugate Prior +Source +Bernoulli(θ) +(θ) ∼Beta +[34] +[32] +Poisson(λ) +λ ∼gamma +[34] +Normal(µ, σ) +µ ∼ Normal, σ known +[34] +µ known, σ2 ∼ inv. Gamma +[34] +σ2 ∼ inv. Gamma, µ|σ2 ∼ Normal +[34] +Multinomial(t)1 +t ∼ Dirichlet +This Paper +Table 1: Position of this paper in comparison to other tractable EVSI cal- +culations. +Shown in Table 1, our point of departure is generalizing the EVSI cal- +culation for a Bernoulli data-generating process with beta prior (a.k.a. a +beta-binomial model) to the case of a multinomial data generating process +with Dirichlet prior. Rich treatment and illustrative examples surround- +ing EVSI calculations for the beta-binomial conjugacy can be found in [15]. +Additionally, [32] provide explicit closed-form value of information compu- +tations for the beta-binomial case and is very close in spirit to this work, +but does not investigate the Dirichlet-multinomial setting. In relation to +the multinomial sampling process we explore in this paper, existing work +has focused on non-utility based approaches where data is valued based on +its ability to bound a parameter of interest within a certain level of preci- +sion [1, 6]. Our approach, in contrast, extends the utility-based valuation +of sampling to a multinomial sampling environment to yield closed-form +expressions for both EVSI and the expected value of distribution informa- +tion (EVDI). Publication of analytically tractable expressions will be able +3 + +to supplant the still-present usage of Monte Carlo simulation in multinomial +settings (see, e.g., [38]). 
When closed-form EVSI expressions are unavailable, quantification of the value created through uncertainty reduction typically relies on one of three techniques: 1) closed-form expressions on the upper bound of the value of data, 2) simulated comparisons of decisions made by an oracle who knows the underlying data distribution against decisions made by a less-informed decision maker, or 3) using variance reduction as a proxy for how data reduces underlying uncertainty in the data-generating process. For examples of the first type, [27] bound EVPI for a risk-averse decision maker and [40] place an upper bound on the value of knowing the true distribution when one already knows the mean and variance of that distribution. Examples of the second type often compare a Bayesian updating procedure to a known optimal solution [8, 29, 7, 35]. Lastly, computing the value of variance reduction independent of the specific quantity of data is also seen within the literature [11, 22].

2 Problem Setup

Despite substantial efforts, notation for preposterior analysis has not been standardized and is often a matter of personal taste [33]. To aid the reader with this paper's notation surrounding its random variables and their realizations, we present the following summary, breaking the notation into three levels of analysis:

1. Data/Sample. Data is an integer-valued random variable with support {0, 1, ..., M}. Sample is a random vector referring to either a sequence of n data observations or a vector of counts representing the number of occurrences of each potential data value recorded in n observations.

D: A random variable representing a single data observation.
d: A single realization of D with integer-valued support: d ∈ {0, 1, ..., M}.
X ≡ (X_1, ..., X_n): A random vector of n observations of D.
x ≡ (x_1, ..., x_n): A realization of data vector X.
D^n: The support of X when n realizations are observed.
n_k: The number of times that k ∈ {0, 1, ..., M} appears in x.
(n_0, n_1, ..., n_M): A vector of counts of occurrences for each potential data value.

2. Data/Sampling Distributions. Data and sampling distributions are identical terms referring to the probability distribution governing the data-generating process. Data distribution is used when generating individual data points, and sampling distribution is preferred when talking about a sequence of observations.

T ≡ (T_0, T_1, ..., T_M): A random vector representing a data distribution. Random elements T_k are data distribution parameters representing the probability of the data realization being k.
t ≡ (t_0, t_1, ..., t_M): A realization of random vector T such that t_k = p(D = k) for k ∈ {0, 1, ..., M}.
t*: The "true" data distribution or sampling distribution; only knowable by an oracle.
𝒯: The space or set of all possible data distributions; T, t, t* ∈ 𝒯.

3. Prior/Posterior Distributions. Continuous multivariate probability distributions with domain of all possible data distributions.

π: A prior from which data distributions are generated.
π_X: A posterior that updates π in light of data X.

¹ (Footnote to Table 1.) With support interpreted as a sequence of integer values.

2.1 Modelling Data and Loss

Consider a data-generating process that generates independent and identically distributed samples from a bounded sequence of M + 1 integers. For notational simplicity, we rescale the sequence to be [M] ≡ {0, 1, ..., M}. For practical motivation, the data could represent product demand, and the goal is to make accurate predictions for inventory control [39]. For the specific case of demand uncertainty, we note that there are asymmetric and other loss functions that would be preferred to the quadratic loss function used here, but closed-form expressions are not forthcoming for those cases.
The data-generating process is governed by an unknown data distribution, t, with discrete, finite support [M]. Thus the statistical model for the data-generating process is parameterized by the standard M-dimensional simplex of probabilities

    𝒯 = {t = (t_0, ..., t_M) ∈ R_+^{M+1} : t_0 + ... + t_M = 1};

this infinite (but finite-dimensional) parameter space describes how we are labeling the potential data distributions. If the sample size of the data is n, we have n values x_1, ..., x_n ∈ [M] being generated by the data-generating process. For a given t ∈ 𝒯, the associated data-generating process p_t^(n) assigns probability

    p_t^(n)(x_1, ..., x_n) = ∏_{i=1}^{n} t_{x_i}    (1)

to this particular sequence of data values. In particular, if the sample size is 1, the data-generating process is simply given by

    p_t(d) ≡ p_t^(1)(d) = t_d,    d ∈ [M].

It is clear that the number of occurrences of particular data values in the sample is a sufficient statistic for the model described, and that the sampling distribution for this sufficient statistic is just the multinomial model. Specifically, if n_d = |{1 ≤ i ≤ n : x_i = d}|, then (n_0, ..., n_M) is a sufficient statistic, and we have, with obvious abuse of notation,

    p_t(n_0, ..., n_M) = (n choose n_0 ··· n_M) ∏_{d=0}^{M} t_d^{n_d}.    (2)

Note that n_0 + ... + n_M = n by definition, so we do not write the superscript (n) when using the sufficient statistic to represent the data.

When making predictions for future data, ideally the action (or prediction) is close to the actual data realization. For tractability, we consider a quadratic terminal opportunity loss function for a single prediction of the following form:

    ℓ(d, a) = k(d − a)²    (3)

where k > 0 is a known constant, a is the action/prediction, and d ∈ [M] is the actual data realization.

To briefly make the above notation more concrete, let's imagine forecasting product demand for a product that will sell between 0 and 5 units (M = 5).
Each period's i.i.d. demand, d ∈ {0, 1, ..., 5}, has an associated probability of occurrence, p_t(0), p_t(1), ..., p_t(5), which is represented more compactly as t_0, t_1, ..., t_5. The effectiveness of any action will be measured using quadratic loss scaled by a factor k, so that if k = 5, d = 4, and a = 1, then ℓ(4, 1) = 45. The decision maker is contemplating the value of n = 3 observations, where the generated data, (x_1, x_2, x_3), might be something like (0, 5, 0) and the associated sufficient statistic of counts, (n_0, ..., n_5), would be (2, 0, 0, 0, 0, 1). Note that t ≡ (t_0, t_1, ..., t_5) parameterizes both the data-generating process of eq. (1) yielding (x_1, x_2, x_3) and the equivalent sampling process of eq. (2) yielding (n_0, ..., n_5). As a result, we refer to t as both the data distribution and the sampling distribution depending on context.

2.2 Preposterior Analysis

For any data distribution t, define the expectation of loss as:

    R(t, a) = E_{D|T=t}[ℓ(D, a)] = Σ_{d=0}^{M} p_t(d) ℓ(d, a),    (4)

where R(t, a) is known as the Bayes risk. Since a decision maker (DM) does not know the underlying "true" data distribution t* ∈ 𝒯, the minimum Bayes risk, min_a R(t*, a), is likely unachievable.

For a DM, risk is evaluated on an average basis, based on the probability distribution the DM places over the simplex 𝒯. Without any sample observations, this distribution is the prior π over all possible data distributions in 𝒯. The average risk of taking action a using prior π is

    R̄(π, a) = E_T[R(T, a)],    (5)

with T ∼ π. The Bayes action for π is

    a*(π) = arg min_{a∈A} R̄(π, a).    (6)

The Bayes risk for π is

    R̄(π, a*(π)) = min_{a∈A} R̄(π, a).    (7)

Access to a sample X ≡ (X_1, ..., X_n) results in a different decision with different risk. With sample observations, the DM applies Bayes' rule to update π to π_X (the posterior) and calculates the associated optimal Bayes action a*(π_X).
Since X is unknown prior to actually collecting the sample, the Bayes risk for π_X is itself a random variable. Hence, we evaluate the DM's prior expectation of loss with sample information over all possible samples X,

    E_X[R̄(π_X, a*(π_X))] = E_T E_{X|T}[R(T, a*(π_X))],    (8)

with T ∼ π and the right-hand-side expression derived by substituting π_X for π in eq. (5) and applying the law of total expectation.

Thus, the expected value of sample information (EVSI), V_n(π), is the difference between the prior expectations of loss with and without sample X under prior π:

    V_n(π) = R̄(π, a*(π)) − E_X[R̄(π_X, a*(π_X))]    (9)
           = E_T[R(T, a*(π))] − E_T E_{X|T}[R(T, a*(π_X))],    (10)

where T ∼ π and eq. (10) follows from eqs. (5) and (8). Proposition 2.1 formalizes our intuition that this expected value of sample information should be non-negative.

Proposition 2.1. Suppose data distribution T ≡ (T_0, ..., T_M) is drawn from a given prior π. Assume further that a DM is given n samples X ≡ (X_1, ..., X_n) and updates his/her prior to the posterior π_X. Then, under quadratic loss, the expected value of these n samples is non-negative, i.e.,

    V_n(π) = E_T[R(T, a*(π))] − E_T E_{X|T}[R(T, a*(π_X))] ≥ 0.    (11)

Proof. See Appendix. □

Because the ordering within the sample X does not matter, the inner expectation in (11) is performed over (n_0, n_1, ..., n_M) ∼ Multinomial(t) conditioned on T = t, where n_j is the number of times that j ∈ [M] appears in the sample, and the outer expectation is performed over T ∼ π.

3 Tractable Valuation of Sample Information

To arrive at a tractable valuation for (10), we leverage the Dirichlet distribution as a prior for three reasons: 1) it is a conjugate prior to categorical/multinomial outcomes, 2) its support is the M-dimensional simplex 𝒯, and 3) it has the flexibility to model many types of prior information for the decision maker.
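Before turning to the closed form, note that the preposterior quantity in eq. (10) can always be estimated by brute-force Monte Carlo: draw a data distribution from the prior, draw a sample from it, update, and compare prior and posterior Bayes actions. The sketch below is our own illustration of this under a Dirichlet prior (the function name and defaults are ours), which is useful for sanity-checking closed-form results:

```python
import numpy as np

def evsi_monte_carlo(alpha, n, k=1.0, reps=50_000, seed=0):
    """Monte Carlo estimate of EVSI, eq. (10), under a Dirichlet prior.

    alpha : Dirichlet parameters (alpha_0, ..., alpha_M) over support {0, ..., M}
    n     : number of observations being valued
    k     : scaling constant of the quadratic loss
    """
    rng = np.random.default_rng(seed)
    alpha = np.asarray(alpha, dtype=float)
    d = np.arange(len(alpha))                 # support {0, ..., M}
    a_prior = d @ alpha / alpha.sum()         # Bayes action a*(pi): prior mean of D

    total = 0.0
    for _ in range(reps):
        t = rng.dirichlet(alpha)              # draw a data distribution T ~ pi
        counts = rng.multinomial(n, t)        # sufficient statistic of the sample X | T = t
        post = alpha + counts                 # conjugate Dirichlet update
        a_post = d @ post / post.sum()        # Bayes action a*(pi_X): posterior mean of D
        # risk difference R(t, a*(pi)) - R(t, a*(pi_X)) under quadratic loss
        total += k * (t @ (d - a_prior) ** 2 - t @ (d - a_post) ** 2)
    return total / reps

# Running example used later in the paper: zero-inflated prior over {0,...,5}, k = 5, n = 3.
prior = [10/6, 1/6, 1/6, 1/6, 1/6, 1/6]
print(evsi_monte_carlo(prior, n=3, k=5))      # closed form (Theorem 3.1) gives 160/77 ~ 2.078
```

Such an estimate converges slowly; the closed-form expression developed next makes it unnecessary for this model.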
With the Dirichlet assumption, the main result of this paper, Theorem 3.1, can be presented:

Theorem 3.1. For data distribution T with support [M] and prior π = Dirichlet(α_0, α_1, ..., α_M), the expected reduction in quadratic loss after observing n data samples, also called the expected value of sample information (EVSI), is given by:

    V_n(π) = kn(c_2 − c_1²) / ((n + α)(1 + α)),    (12)

where α = Σ_{d=0}^{M} α_d is the precision/concentration parameter of the Dirichlet distribution (see [16]) and c_1 = (1/α) Σ_{d=0}^{M} d α_d and c_2 = (1/α) Σ_{d=0}^{M} d² α_d are the first and second moments of the data under the marginal likelihood (α_0, α_1, ..., α_M)/α.

Proof. See Appendix. □

Theorem 3.1 gives the expected value of observing an n-trial multinomial sample with Dirichlet prior where the support of the underlying data-generating process is the bounded sequence of integers [M] = {0, 1, ..., M}. This is a natural generalization of valuing an n-trial binomial sample with beta prior where the support of the underlying data-generating process is restricted such that [M] = {0, 1}. With just a slight change of notation, we know from [32] that EVSI for the beta-binomial case in closed form is:

    kn/(n + α_0 + α_1) · α_0 α_1 / ((α_0 + α_1)²(α_0 + α_1 + 1)),    (13)

where π ∼ Beta(α_0, α_1). Replacing this prior with the equivalent Dirichlet parameterization π ∼ Dirichlet(α_0, α_1) and using Theorem 3.1 yields an identical result:

    V_n(π) = kn(c_2 − c_1²) / ((n + α)(1 + α))
           = kn/(n + α_0 + α_1) · [α_1/(α_0 + α_1) − α_1²/(α_0 + α_1)²] / (α_0 + α_1 + 1)
           = kn/(n + α_0 + α_1) · α_0 α_1 / ((α_0 + α_1)²(α_0 + α_1 + 1)).    (14)

As a direct consequence of Theorem 3.1, when n → ∞ we have an expression for the expected value of distribution information (EVDI), as an infinite sample gives the data distribution exactly:

    V_∞(π) = lim_{n→∞} V_n(π) = k(c_2 − c_1²) / (1 + α).    (15)

Lastly, we can express the efficiency η of the sample information as a function of the number of sample points using the ratio of (12) to (15):

    η = n / (n + α).    (16)

Hence, the percentage of value obtained through sampling is given by the ratio of the number of data points n to the sum of the n data points and the concentration parameter α of the Dirichlet distribution. This sampling efficiency calculation directly simplifies to the known formula for the beta-binomial case from [34] (in our notation): η = n/(n + α_0 + α_1), where π ∼ Beta(α_0, α_1).

Again, we make the notation more concrete by revisiting our product demand forecasting example from the end of §2.1. Recall, we have a product that will sell between 0 and 5 units (M = 5) and loss is scaled by k = 5. The decision maker is contemplating the value of n = 3 observations. Introducing a zero-inflated prior π ∼ Dirichlet(10/6, 1/6, 1/6, 1/6, 1/6, 1/6) means

    α = 15/6,
    c_1 = (6/15) · (0 · 10/6 + 1 · 1/6 + 2 · 1/6 + 3 · 1/6 + 4 · 1/6 + 5 · 1/6) = 1,
    c_2 = (6/15) · (0 · 10/6 + 1 · 1/6 + 4 · 1/6 + 9 · 1/6 + 16 · 1/6 + 25 · 1/6) = 11/3.

Plugging into eq. (12) yields EVSI V_3(π) = 160/77 ≈ 2.08 and EVDI V_∞(π) = 80/21 ≈ 3.81. From eq. (16) we get η = 6/11 ≈ 54.5%, so the learning from n = 3 samples is expected to provide more than half of the maximum possible reduction in loss. Following from eqs. (26)–(31), a*(π) = 1 and the prior expected loss is

    R̄(π, a*(π)) = 5 · ((0 − 1)² · 10/15 + (1 − 1)² · 1/15 + (2 − 1)² · 1/15 + (3 − 1)² · 1/15 + (4 − 1)² · 1/15 + (5 − 1)² · 1/15) = 40/3 ≈ 13.33.

And thus we can also get the prior expectation of posterior loss: E_X[R̄(π_X, a*(π_X))] = R̄(π, a*(π)) − V_3(π) = 40/3 − 160/77 ≈ 11.26.

4 Notes on the Richness and Interpretability of Modelling Assumptions

In the previous section, we showed that one of the three EVSI desiderata, tractability, can be achieved for a multinomial data-generating process with Dirichlet prior.
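The arithmetic of the worked example above is easy to verify mechanically. A minimal sketch using exact rational arithmetic (the variable names are ours):

```python
from fractions import Fraction as F

# Zero-inflated prior pi ~ Dirichlet(10/6, 1/6, 1/6, 1/6, 1/6, 1/6) over {0, ..., 5}
alpha = [F(10, 6)] + [F(1, 6)] * 5
k, n = 5, 3

a = sum(alpha)                                            # concentration alpha = 5/2
c1 = sum(d * ad for d, ad in enumerate(alpha)) / a        # first moment of the data
c2 = sum(d * d * ad for d, ad in enumerate(alpha)) / a    # second moment of the data

evsi = k * n * (c2 - c1 ** 2) / ((n + a) * (1 + a))       # eq. (12)
evdi = k * (c2 - c1 ** 2) / (1 + a)                       # eq. (15)
eta = F(n, 1) / (n + a)                                   # eq. (16)
prior_loss = k * sum((ad / a) * (d - c1) ** 2 for d, ad in enumerate(alpha))

print(evsi, evdi, eta, prior_loss, prior_loss - evsi)
# 160/77 80/21 6/11 40/3 2600/231  (approx. 2.08, 3.81, 0.545, 13.33, 11.26)
```

The printed values reproduce the paper's figures exactly, including the prior expectation of posterior loss 40/3 − 160/77 = 2600/231 ≈ 11.26.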
The multinomial distribution is flexible enough to model any discrete (finite) data distribution. Its prior, the Dirichlet distribution, is also flexible in its ability to model a wide range of distributions over a simplex. Yet, some sacrifice of richness in modeling prior beliefs is made in the name of tractability. Most notably, a richer and more flexible alternative prior over a simplex is the logistic-normal distribution (see the discussion in [3]). The most glaring weakness of the Dirichlet distribution is in modeling prior beliefs where there is some type of correlation structure between data observations. For example, observing a high data value, say 100, would make one think values of 101 and 99 are also more likely to occur than data values further away. However, the Dirichlet distribution, as a prior distribution for multinomial data, is unable to capture this structure. Notably, the distribution-free underpinnings of the Kaplan-Meier estimator also ignore this potential correlation among data observations, yet show favorable results in a similar repeated newsvendor setting [17].

The richness of the Dirichlet prior is best seen through the lens of its intuitive reparameterization [16]. Let the concentration parameter be α = Σ_{i=0}^{M} α_i and let the vector m = (α_0/α, α_1/α, ..., α_M/α) represent the mean, where the expected mean of the data observations is given as c_1 = (1/α) Σ_{i=0}^{M} i α_i = Σ_{i=0}^{M} i m_i.
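This (α, m) reparameterization is convenient to work with directly; a small sketch (the numbers reuse the zero-inflated running example and are otherwise our own choices):

```python
import numpy as np

# Mean vector m on the simplex and concentration alpha, per the
# reparameterization above (values reuse the zero-inflated example).
m = np.array([10., 1., 1., 1., 1., 1.])
m /= m.sum()                      # m = (10/15, 1/15, ..., 1/15)
conc = 2.5                        # concentration parameter alpha

alpha_vec = conc * m              # recovers Dirichlet(10/6, 1/6, ..., 1/6)
d = np.arange(len(m))             # support {0, ..., M}
c1 = d @ m                        # expected mean of the data, c1 = sum_i i * m_i
c2 = (d ** 2) @ m                 # second moment of the data
print(alpha_vec.round(4), c1, c2 - c1 ** 2)   # predicted variance c2 - c1^2 = 8/3
```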
When α is small, say α ≤ M, the prior distribution over the simplex can differ greatly from m and reflect a decision maker's uncertainty around their expectation.

[Figure 1: Graphical depiction of the Dirichlet prior parameters, potential realizations for that prior (i.e., the multinomial parameters), and the EVSI/EVDI calculations as a function of n samples for the given prior. Top row for concentration parameter α = 10 and bottom row for concentration parameter α = 50.]

As α is made larger, the prior distribution will concentrate probability density near m and reflect greater confidence. We present a graphical overview of this in Figure 1 for two different concentration parameters. As seen, when α is smaller (top row of Figure 1), the realized multinomial parameters (middle-top plot) can be further away from the mean m (which is proportional to the parameters in the top-left plot).
As α increases (bottom row), the prior distribution becomes much more informative, and the multinomial parameters will most likely mirror the prior Dirichlet parameters.

In terms of interpretability, Theorem 3.1 formalizes our intuition about what drives the value of data. Specifically, data is valuable when 1) the sample contains a lot of data (high n), 2) the expected variance of the data distribution is large (high c_2 − c_1²), and 3) there is a lot of uncertainty regarding the true data distribution (α is small). Additionally, the calculation for EVDI (eq. 15) gives an interpretable upper bound on the value of data, where high variance pushes to make samples more valuable and a high concentration parameter makes samples less valuable. Lastly, the equation for efficiency (16) adds further insight by stating how quickly the upper bound on the value of data is approached; basically, the smaller the Dirichlet concentration parameter, the more quickly EVDI is approached with each subsequent data point.

5 Illustrative Examples

In this section, we demonstrate how the tractable formulation for EVSI, equation (12), can serve as a building block inside other research initiatives. The first example explores sample size optimization, and the second example shows how a tractable EVSI calculation can lead to a tractable decision policy in a two-stage production planning problem. In the third and final example, the EVSI formula provides a foundation from which to benchmark heuristic updating procedures that seek to estimate an underlying unknown data distribution.

5.1 The Choice of Sample Size

We now explore a decision maker's objective of choosing the number of sample points to collect in such a way as to minimize expected loss, assuming the expected sampling cost, C_s(n), is a linear function of the number of sampled points n:

    C_s(n) = K + sn,    (17)

where s is the cost of one sample and K represents the fixed costs of sampling.
The loss function to be minimized, ℓ_s(n), combines equations (12) and (17):

    ℓ_s(n) = −kn(c_2 − c_1²) / ((n + α)(1 + α)) + K + sn.    (18)

Assuming for practical purposes that n can be treated as continuous, we get the optimal sample size:

    n* = sqrt( (α/(1 + α)) · (k/s) · (c_2 − c_1²) ) − α    (19)

for cases where n* is positively valued and the fixed costs of sampling K can be recovered, i.e., V_{n*}(π) > C_s(n*). In all other cases, n* = 0. Equation (19) has a nice economic interpretation, where the three terms under the square root represent the strength of the prior, the ratio between the scaling of the quadratic loss costs and the unit sampling costs, and the predicted variance of the data distribution.

5.2 Two-Stage Production Planning

The example shown here is a simple two-stage production planning problem (see, e.g., [9]) where the decision maker seeks to optimally schedule the second production run.

Assume J periods make up a selling season. Each period j ∈ {1, ..., J} faces independent and identical categorical demand with Dirichlet prior and quadratic loss (i.e., a repeated newsvendor setting with quadratic loss), with identical shipments scheduled for each period. A decision maker can choose either 1) to schedule the delivery quantity for each period in the entire selling season or 2) at cost K, to specify a period j* after which the scheduled delivery quantity can be changed. Assuming this change date will be contractually set in advance of the selling season, find j* to minimize expected net costs over the entire season J.

The net cost function for this problem is:

    C(j) = 0,    if j = 0,
    C(j) = K − (J − j) · kj(c_2 − c_1²) / ((j + α)(1 + α)),    if j ∈ (0, J].    (20)

When j ∈ (0, J], the net cost function C(·) is strictly convex and has a unique global minimum value. The optimal period j* is

    j* = arg min_{j ∈ {0, 1, ..., J}} C(j).

When min C(j) = 0 for 0 < j ≤ J, we choose j* = 0.
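Because J is finite, j* can also be found by direct enumeration of C(j) over {0, 1, ..., J}, which sidesteps any case analysis. A minimal sketch (the parameter values are illustrative choices of ours, reusing the running example's prior):

```python
import math

def C(j, J, K, k, alpha, var):
    """Net cost (eq. 20) of allowing a schedule change after period j.

    var is the predicted variance of the data distribution, c2 - c1**2.
    """
    if j == 0:
        return 0.0
    return K - (J - j) * k * j * var / ((j + alpha) * (1 + alpha))

# Illustrative parameters: the running example's prior (alpha = 2.5,
# c2 - c1**2 = 8/3, k = 5) over a J = 20 period season, changeover cost K = 3.
J, K, k, alpha, var = 20, 3.0, 5.0, 2.5, 8.0 / 3.0

j_star = min(range(J + 1), key=lambda j: C(j, J, K, k, alpha, var))
j0 = math.sqrt(alpha * (J + alpha)) - alpha   # continuous interior optimum
print(j_star, j0)                             # -> 5 5.0
```

For these parameter values the continuous optimum happens to be the integer 5, so enumeration and the closed form agree exactly.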
For the case when min C(j) < 0, we have

j* = √(α(J + α)) − α.

Considering that j* must be a non-negative integer, and summarizing the different cases, we have the optimal j* as

j* = 0, if min over j ∈ [0, J] of C(j) = 0;
j* = argmin over j ∈ {⌊j₀⌋, ⌈j₀⌉} of C(j), if min over j ∈ [0, J] of C(j) < 0,    (21)

where j₀ = √(α(J + α)) − α.

5.3 Benchmarking Data-Driven Algorithms

An active area of research is to propose algorithms for decisions in repeated settings where minimal assumptions about the underlying data distribution are made. These approaches include sample average approximation (SAA) [24, 23], concave adaptive value estimation (CAVE) [12], and second-order belief maximum entropy (SOBME) [35]. When benchmarking these algorithms, it is customary to pick a handful of "true" distributions against which the algorithm competes with a known optimal solution.

With the introduction of a closed-form EVSI calculation in the context of a Dirichlet prior, a more robust benchmarking scenario can be achieved. Instead of picking a "true" data distribution, we pick a "true prior" from the Dirichlet family with support matching the problem of interest. This prior can then be used to simulate "true" data distributions (as many as we want) from which we can estimate the reduction in squared loss as a function of n, the number of data samples. Given this setup, a proposed algorithm can be compared against a known optimal updating procedure. After all, it is the updating procedure that we are seeking to validate, and the optimal updating procedure to benchmark new algorithms against is, therefore, the Bayesian one detailed in the proof of Theorem 3.1 (see appendix).

As a proof of concept, Figure 2 is an example benchmarking the well-known sample average approximation (SAA) (see [24]) against the known optimal Bayesian updating procedure (BAYES) using a Dirichlet(α₀, α₁, . . . , α_M)
prior with M = 20, α = 10, and m ∝ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 13, 11, 9, 7, 5, 3, 1} (chosen to be slightly skewed). In this scenario, we see the value of prior information in small-data settings as BAYES outperforms SAA. It also shows how, as the amount of data increases, the non-parametric SAA algorithm's performance improves and closely mimics that of the optimal Bayesian updating procedure.

Figure 2: Comparing the sample average approximation (SAA) updating procedure to the known Bayesian (BAYES) optimal updating procedure; expected quadratic loss as a function of the number of sample data points n for M = 20, with the optimal squared loss (i.e. distribution known) shown for reference.

6 Conclusion

The use of preposterior analysis in this paper provides a formal method for valuing data prior to its collection and, as such, should serve as a building block in many systems and models going forward. By expanding the support of the underlying data-generating process from [M] = {0, 1} to [M] = {0, 1, . . . , M}, the beta-binomial EVSI calculations are successfully generalized to a Dirichlet-multinomial setting. Using this new EVSI computation, three illustrative examples valuing data prior to its collection are shown; there are potentially many other contexts where this tractable formulation might also prove useful. Researchers in two particular areas, medical decision making and active (machine) learning, are known to be interested in EVSI-type calculations (see, e.g., [2, 13, 18, 31]), and we look forward to hearing of other useful deployments of this method for valuing data prior to its collection.

A Proof of Proposition 2.1 and Theorem 3.1

A.1 Proof of Proposition 2.1

The expected value of sample information is

Vn(π) = E_T[R(T, a*(π))] − E_T E_{X|T}[R(T, a*(π_X))].    (22)

For the first term in eq.
(22), we have

E_T[R(T, a*(π))] = k E_T[ E_{D|T}[ (D − a*(π))² ] ]
 = k E_T[ E_{D|T}[ (D − E[D])² ] ]
 = k E_D[ (D − E[D])² ]
 = k Var[D].    (23)

The second line is due to the optimal action under squared loss being the mean (see eq. (30)). The third line of equation (23) follows from the law of total expectation. Thus, the optimal Bayes risk without sample information under quadratic loss (3) is the marginal variance of D scaled by a factor k.

Similarly, for the second term in eq. (22) we find

E_T[ E_{X|T}[ R(T, a*(π_X)) ] ] = k E_T[ E_{X|T}[ E_{D|T}[ (D − a*(π_X))² ] ] ]
 = k E_T[ E_{X|T}[ E_{D|T}[ (D − E_{D|X}[D])² ] ] ]
 = k E_X[ E_{D|X}[ (D − E_{D|X}[D])² ] ]
 = k E_X[ Var_{D|X}[D] ].    (24)

The optimal Bayes risk under quadratic loss (3) if a sample of size n is to be collected is the expected variance of the predictive posterior distribution of D scaled by a factor k.

Combining (22), (23), and (24), we complete the proof:

Vn(π) = E_T[R(T, a*(π))] − E_T[ E_{X|T}[ R(T, a*(π_X)) ] ]
 = k Var[D] − k E_X[ Var_{D|X}[D] ]
 = k ( Var[D] − E_X[ Var_{D|X}[D] ] )
 = k Var_X[ E_{D|X}[D] ] ≥ 0.    (25)

The last equality in equation (25) follows from the law of total variance. Since k > 0 and Var_X[E_{D|X}[D]] ≥ 0 for any X, we have Vn(π) ≥ 0 for any sample size n. □

A.2 Proof of Theorem 3.1

Consider the prior distribution for the data-generating process

π = Dirichlet(α₀, α₁, . . . , α_M).

Suppose our information consists of n samples of the data distribution. Let n_j, j ∈ [M], be the frequency of the data being j, so that the n_j are integers with Σ_{j=0}^{M} n_j = n. Then, because the multinomial and Dirichlet distributions are conjugate,

π_X = Dirichlet(α₀ + n₀, α₁ + n₁, . . . , α_M + n_M).

Because π and π_X both belong to the same class of distributions, we derive closed-form valuations for the information X. The corresponding marginal likelihoods for π and π_X are

q_π(d) = α_d / α,    q_{π_X}(d) = (α_d + n_d) / (α + n),

where α = Σ_{i=0}^{M} α_i.
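The conjugate update and marginal likelihoods above are mechanical to implement; a small sketch of the Dirichlet-multinomial posterior, with the prior parameters and observed counts chosen hypothetically:

```python
import numpy as np

def posterior_params(alpha_vec, counts):
    """Dirichlet-multinomial conjugacy: the posterior is Dirichlet(alpha_d + n_d)."""
    return np.asarray(alpha_vec, dtype=float) + np.asarray(counts, dtype=float)

def marginal_likelihood(alpha_vec):
    """q(d) = alpha_d / alpha, the predictive probability of outcome d."""
    a = np.asarray(alpha_vec, dtype=float)
    return a / a.sum()

# Prior over outcomes {0, 1, 2} and observed frequencies n_d (hypothetical values).
prior = [2.0, 3.0, 5.0]
counts = [1, 0, 4]
post = posterior_params(prior, counts)   # Dirichlet(3, 3, 9)
q_prior = marginal_likelihood(prior)     # (0.2, 0.3, 0.5)
q_post = marginal_likelihood(post)       # (0.2, 0.2, 0.6)
```

Note that if the counts were proportional to the prior parameters, the marginal likelihood would be unchanged, exactly as observed at the start of the theorem's proof.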
If the information happens to occur in such a way that n_j ∝ α_j for each j, then the updated marginal likelihood is unchanged: q_π(d) = q_{π_X}(d), d ∈ [M].

For convenience, define the quantities

Z = (1/n) Σ_{d=0}^{M} d n_d,  c₁ = (1/α) Σ_{d=0}^{M} d α_d,  c₂ = (1/α) Σ_{d=0}^{M} d² α_d,

where Z represents the average frequency of the sample, c₁ the prior expectation for a sample value, and c₂ the prior second moment for the sample value.

Given the loss function in (3), the Bayes risk and action without sample information can be explicitly calculated:

R̄(π, a) = E_{T∼π}[R(T, a)]    (26)
 = E_{T∼π}[ Σ_{d=0}^{M} p_T(d) ℓ(d, a) ]    (27)
 = Σ_{d=0}^{M} ℓ(d, a) E_{T∼π}[p_T(d)]    (28)
 = Σ_{d=0}^{M} ℓ(d, a) q_π(d),    (29)

where {q_π(0), q_π(1), . . . , q_π(M)} is the marginal likelihood. The Bayes action minimizes eq. (29):

∂R̄(π, a)/∂a = −2k Σ_{d=0}^{M} (d − a) q_π(d) = −2k ( Σ_{d=0}^{M} d q_π(d) − a Σ_{d=0}^{M} q_π(d) ) = 0
 ⇒ a*(π) = Σ_{d=0}^{M} d q_π(d) = E_{q_π}[D]    (30)
 = c₁,    (31)

the mean data outcome under the prior marginal likelihood. The corresponding Bayes risk is

R̄(π, a*(π)) = k Σ_{d=0}^{M} (d − a*(π))² q_π(d) = k Var_{q_π}[D] = k(c₂ − c₁²).

Similarly, with sample information we have

∂R̄(π_X, a)/∂a = −2k Σ_{d=0}^{M} (d − a) q_{π_X}(d) = −2k ( Σ_{d=0}^{M} d q_{π_X}(d) − a Σ_{d=0}^{M} q_{π_X}(d) ) = 0
 ⇒ a*(π_X) = Σ_{d=0}^{M} d q_{π_X}(d) = E_{q_{π_X}}[D] = (αc₁ + nZ)/(α + n),    (32)

which is the mean data outcome under the posterior marginal likelihood.

Now expressing EVSI as

Vn(π) = R̄(π, a*(π)) − E_T E_{X|T} R(T, a*(π_X)),    (33)

note the inner expectation is taken over the data frequencies, which follow a multinomial distribution, (n₀, . . . , n_M) ∼ Multinomial(p_t(0), . . . , p_t(M)), and the outer expectation is taken over all possible distributions p_t ∼ Dirichlet(α₀, . . . , α_M).

The first term in (33) has already been evaluated as k(c₂ − c₁²). We now calculate the second term.
R(t, a*(π_X)) = k Σ_{d=0}^{M} p_t(d) (d − a*(π_X))²
 = k Σ_{d=0}^{M} p_t(d) ( d − (αc₁ + nZ)/(α + n) )²

⇒ E_{X|T=t}[R(t, a*(π_X))] = k Σ_{d=0}^{M} p_t(d) [ d² − ( 2nd/(α + n) − 2nαc₁/(α + n)² ) E_X[Z] − 2dαc₁/(α + n) + α²c₁²/(α + n)² + ( n²/(α + n)² ) E_X[Z²] ].    (34)

Since Z(n₀, . . . , n_M) = (1/n) Σ_{d=0}^{M} d n_d,

E_{X|T=t}[Z] = Σ_{d=0}^{M} d p_t(d),
E_{X|T=t}[Z²] = Var_{X|T=t}[Z] + ( E_{X|T=t}[Z] )²
 = (1/n) Σ_{d=0}^{M} d² p_t(d) + ((n − 1)/n) ( Σ_{d=0}^{M} d p_t(d) )²,

where the last line follows from the fact that

Var_{X|T=t}[Z] = Var_{X|T=t}[ (1/n) Σ_{d=0}^{M} d n_d ] = (1/n²) Var_{X|T=t}[ Σ_{d=0}^{M} d n_d ]
 = (1/n²) [ Σ_{d=0}^{M} d² Var_{X|T=t}[n_d] + 2 Σ_{0≤i<j≤M} i j Cov_{X|T=t}[n_i, n_j] ].

15: if max(Ibest) > Ith then
16:   xk+1 ← xbest(MaxInfoIndex);
17:   xhist ← xhist ∪ xk+1;
18: else
19:   xk+1 ← xk−1; // Back to previous action
20:   Remove xk−1 from xhist;
21: end if
22: // Execute the action and update the map
23: Plocal ← Astar(xk, xk+1); // Plan local path by A*
24: mk+1 ← OccupancyGridMapping(Plocal);
25: end while

Proposition 1: The time complexity of our proposed method at each while-loop step in Algorithm 1 is

O(N Nz Nc²) [explicit MI evaluation] + O(Nepoch N log Nq) [BKI MI inference]    (12)

where Nepoch is the number of training epochs, and Nz and Nc are the number of beams per sensor scan and the number of cells that a beam intersects in the grid map in the worst case, respectively.

Algorithm 2 BKI Optimization( )
Require: Training set D = {(xi, yi)} for i = 1, . . . , N, current action set to be evaluated x*, training epochs Nepoch, factor α
1: xbest ← {}, Ibest ← {};
2: for each epoch do
3:   // Compute the kernel function using Eq. (11)
4:   k ← KernelFunction(x*, x);
5:   // Compute MI and uncertainty using Eq. (8)
6:   k̄ ← Σk, ȳ ← k · y;
7:   Ī* ← ȳ/k̄, σ*_I ← Σ/k̄;
8:   ObjFunc ← αĪ* + (1 − α)σ*_I;
9:   xs ← max(ObjFunc);
10:  if xs ∈ x then
11:    xbest ← xbest ∪ xs, Ibest ← Ibest ∪ ys;
12:  else
13:    // Evaluate MI explicitly using Eq.
(1)
14:    Is = CalculateMI(xs);
15:    // Add into D
16:    xbest ← xbest ∪ xs, x ← x ∪ xs;
17:    Ibest ← Ibest ∪ Is, y ← y ∪ Is;
18:  end if
19: end for
20: return xbest, Ibest

Significantly, the GP-based robot exploration in [18] and the BO-based method in [19] have the same time cost as ours in explicit MI evaluation, but these two methods have computational complexities of O(N³ + N²Nq) and O(Nepoch(N³ + N²Nq)), respectively, to perform the expensive GP inference for MI. This comparative theoretical result indicates our BKI-based exploration method outperforms the GP-based methods in time efficiency, especially in large-scale and cluttered places which need more samples N and Nq to evaluate rapidly.

V. RESULTS AND DISCUSSIONS

In this section, we run numerical simulations and dataset experiments on a desktop PC with a 3.6 GHz Intel i3-9100F CPU and 32 GB RAM to verify the effectiveness of the proposed BKI-based robot exploration method. The information threshold is Ith = 0.05 bit and the trade-off factor is α = 0.5. We adopt a Matérn kernel for GP and the kernel parameters are ℓ = 1 and ν = 3/2 for all simulations. We also choose the parameters ζ = 0.001 and σ = 0.01 for the BKI method. The robot poses are assumed to be known and the robot's candidate actions are sampled uniformly in the FOV of the range sensor. We conduct 20 Monte Carlo trials for all maps.

We use greedy-based optimization (named "NBO" in the simulations), batch GP with only 1 epoch for optimization ("batch GP") [18], and GP-based BO with multiple epochs ("GP-BO") [19] to compare with our methods, one named "batch BKI" with only 1 optimization epoch and another one named "BKI-BO" with multiple epochs.
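The BKI prediction step of Algorithm 2 (lines 4–8) reduces to kernel-weighted sums rather than a full GP solve, which is where the O(Nepoch N log Nq) cost comes from. The following is a simplified sketch of that idea only; the compactly supported kernel, its parameters, and the uncertainty proxy here are our own illustrative assumptions, not the paper's exact Eq. (8) and Eq. (11):

```python
import numpy as np

def sparse_kernel(x_query, x_train, length_scale=1.0):
    """Hypothetical compactly supported kernel: positive within length_scale, zero beyond."""
    d = np.linalg.norm(x_train - x_query, axis=1)
    r = np.clip(d / length_scale, 0.0, 1.0)
    return (1.0 - r) ** 2  # zero weight outside the support radius

def bki_predict(x_query, x_train, y_train, sigma0=1.0):
    """Kernel-weighted MI mean and a simple uncertainty proxy (cf. Alg. 2, lines 6-7):
    evidence accumulates in k_bar, so variance shrinks near already-evaluated actions."""
    k = sparse_kernel(x_query, x_train)
    k_bar = sigma0 + k.sum()      # accumulated evidence
    y_bar = float(k @ y_train)    # kernel-weighted MI mass
    return y_bar / k_bar, sigma0 / k_bar

# Two explicitly evaluated actions with known MI; predict a nearby third (values hypothetical).
x_train = np.array([[0.0, 0.0], [1.0, 0.0]])
y_train = np.array([0.6, 0.4])
mean, var = bki_predict(np.array([0.5, 0.0]), x_train, y_train)
```

Because the kernel has compact support, a spatial index over the training actions can restrict each query to its neighbors, which is what makes the per-query cost logarithmic rather than cubic in N.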
Meanwhile, to validate the time efficiencies, we apply two cases of N = 30 and N = 60 samples for each method, where GP-BO 30 and BKI-BO 30 use Nepoch = 15 iterations, and GP-BO 60 and BKI-BO 60 use 30 epochs in BKI optimization. We also set Nq = 8N in all simulations.

Fig. 2. An example of BKI-based robot exploration in an unknown structured environment: (a) informative trajectory in the occupancy map; (b) MI surface. Yellow square: start point; yellow star: end point; red line: robot direction at each action.

A. Synthetic Environments Results

To simulate indoor and field scenes, we generate two 24 m × 14 m synthetic maps: one structured maze map surrounded by several walls (shown in Fig. 2, Nloop = 50), and one unstructured map consisting of circles and ellipses (shown in Fig. 1, Nloop = 150). The map resolutions are both 0.2 m. The simulated range sensor has a FOV of ±1.5 rad with a resolution of 0.05 rad, and a maximum sensing range of 6 m. The robot is initially at [1.2 m, 1.2 m] with a 0 rad heading and tries to explore the a priori unknown map. The representative resulting paths maximizing the information objective function are in Fig. 1 and Fig. 2.

The qualitative results of the structured and unstructured maps are shown in Fig. 3 and Fig. 4, respectively. To compare the exploration performance of the different methods intuitively, we present the evolution of map entropy and coverage rate for each method in the figures, where the solid and dashed lines depict the means of the Monte Carlo trials for each method, and the shaded regions represent the standard deviations.

Fig.
3 shows the BKI and GP methods have performance similar to the NBO methods, since this structured scene is relatively small and simple, especially in the beginning stage where there is only one corridor to move forward through. In contrast, Fig. 4 indicates that the NBO methods spend more time (about 50–70 steps) to converge and end the exploration, while the BKI and GP methods complete the exploration with entropy reduction and coverage rates comparable to the NBO methods.

Fig. 3. Map entropy and coverage results of the synthetic structured map.

Moreover, as in Fig. 5, we use the explicitly evaluated MI as the ground truth and compute the MI prediction errors of the BKI-BO and GP-BO methods with small training samples at a randomly selected step, which implies the BKI-based approach can resemble the GP-based one in MI inference accuracy when facing challenging cases.

In short, these results validate that our BKI methods have properties competitive with GP-based exploration methods in typical structured and unstructured scenes.

B. Dataset Results

To test our method in a more complex environment, we choose the Seattle map [32], containing narrow long corridors and cluttered rooms, as in Fig. 6. The map size is 24 m × 14 m with a resolution of 0.2 m.
We use a simulated laser scanner emitting 20 beams uniformly within a FOV of ±π/3 rad at a maximum range of 4 m. The robot starts at [13 m, 57 m] with a −π/2 initial heading angle. Nloop is set to 100.

Fig. 7 presents the comparative curves of map entropy and coverage rates. Fig. 7(a) shows the BKI-BO methods have more rapid rates of map entropy reduction after the exploration starts and arrive at relatively lower levels than the other methods; among them, BKI-BO 60 performs the best. In this typical cluttered map, the GP-BO methods perform slightly worse than our BKI-BO methods but almost catch up with them, and both are much better than the NBO methods. The curves in Fig. 7(b) imply that batch GP and batch BKI have similar performance. We can also gain insight from Fig. 7(c) and (d): the coverage curves of the BKI-BO methods converge slightly earlier than the GP-BO methods and reach higher values, and all BO-based methods explore the unknown place much faster than the NBO ones. This result evidences that our BKI methods are more suitable for large cluttered environments.

C. Time Efficiency

We have presented the exploration results in the previous simulations of typical scenes, and our BKI-based method has shown the desired exploration performance in efficiency and accuracy compared with state-of-the-art methods. For a more intuitive and specific comparison, we further analyze the time cost of each method per exploration step in all maps. As in Table I, the results show the time cost of the whole exploration process per step in the form of means and standard deviations, as well as the average percentage of evaluation and decision-making time spent by the different methods in each step.
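The map entropy and coverage metrics reported in these figures are commonly computed per cell over the occupancy grid; a brief sketch under that assumption, with a hypothetical grid (the decision threshold eps is ours, not a value from the paper):

```python
import numpy as np

def map_entropy(occ_grid):
    """Shannon entropy of an occupancy grid, in bits: for each cell with occupancy
    probability p, H(p) = -p*log2(p) - (1-p)*log2(1-p); unknown cells (p = 0.5)
    contribute a full bit each, so entropy falls as the map is explored."""
    p = np.clip(np.asarray(occ_grid, dtype=float), 1e-9, 1.0 - 1e-9)
    return float(np.sum(-p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)))

def coverage(occ_grid, eps=0.05):
    """Fraction of cells decided as free or occupied (within eps of 0 or 1)."""
    p = np.asarray(occ_grid, dtype=float)
    decided = np.logical_or(p < eps, p > 1.0 - eps)
    return float(decided.mean())

grid = np.array([[0.5, 0.5, 0.01],
                 [0.5, 0.99, 0.01]])  # hypothetical 2 x 3 occupancy map
H = map_entropy(grid)   # the three unknown cells dominate the total
c = coverage(grid)      # 3 of 6 cells decided -> 0.5
```

These two quantities are exactly the y-axes of the entropy and coverage plots: a good explorer drives H down and c up in as few steps as possible.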
Fig. 4. Map entropy and coverage results of the synthetic unstructured map.

TABLE I
TIME COST COMPARISON OF DIFFERENT EXPLORATION METHODS

Methods              | Synthetic structured map  | Synthetic unstructured map | Seattle map [32]
NBO 30               | 95.29% / 10.4455 ± 0.9409 | 96% / 12.1683 ± 1.3856     | 96.95% / 4.8434 ± 0.7311
NBO 60               | 95.38% / 10.9967 ± 1.0676 | 95.93% / 12.4971 ± 2.1583  | 96.93% / 5.3502 ± 0.9009
batch GP 30          | 5.15% / 0.4387 ± 0.0246   | 4.66% / 0.2805 ± 0.0232    | 12.68% / 0.2134 ± 0.0169
batch GP 60          | 6.44% / 0.4444 ± 0.0487   | 5.89% / 0.3021 ± 0.0362    | 14.94% / 0.2291 ± 0.0226
batch BKI 30 (ours)  | 3.05% / 0.4324 ± 0.0276   | 2.93% / 0.2485 ± 0.0346    | 7.67% / 0.2036 ± 0.0254
batch BKI 60 (ours)  | 3.87% / 0.4407 ± 0.0384   | 3.56% / 0.2731 ± 0.0356    | 9.05% / 0.2065 ± 0.0229
GP-BO 30             | 49.71% / 0.9435 ± 0.0609  | 48.09% / 0.6083 ± 0.1121   | 67.97% / 0.5203 ± 0.0554
GP-BO 60             | 74.03% / 1.8265 ± 0.1189  | 72.99% / 1.3558 ± 0.1190   | 84.26% / 1.0528 ± 0.1124
BKI-BO 30 (ours)     | 39% / 0.7518 ± 0.0683     | 39.14% / 0.514 ± 0.0966    | 54.03% / 0.3903 ± 0.1175
BKI-BO 60 (ours)     | 53.74% / 0.9952 ± 0.1061  | 54.31% / 0.7363 ± 0.1186   | 62.45% / 0.4955 ± 0.1775

Note: time cost of inference per step (in percent) / total time cost of exploration per step of each method (in seconds).

Fig. 5.
A challenging example of MI prediction error comparison using BKI and GP methods trained with fewer samples at a randomly selected exploration step.

Fig. 6. An example of BKI-based robot exploration in the large cluttered Seattle map [32]: (a) exploration trajectory; (b) MI surface. White square: start point; white star: end point.

Among the 10 methods, the basic NBO methods have the most expensive time consumption (roughly 8–50 times that of the BKI and GP methods) per step, while the other methods based on GP and BKI cost much less time, showing the efficiency of the Bayesian-optimization-based approaches. We can further analyze these results from two points of view. From the top rows to the bottom, our BKI-based methods achieve better time efficiency in decision-making and inference than the corresponding GP-based ones in all maps when the number of samples increases, e.g. batch BKI 30/60 vs. batch GP 30/60 and BKI-BO 30/60 vs. GP-BO 30/60. We also observe that the BKI methods run faster than the GP ones when using more training epochs. The BKI methods also bring significant time savings for exploration, such as decreasing time by about 20% and 45% compared with GP-BO 30 and GP-BO 60, respectively, in the structured map.

Fig. 7. Map entropy and coverage results of the Seattle map (batch methods omitted).

From the left columns to the right, these above-mentioned differences get more distinct in the unstructured and large cluttered maps, e.g.
the time costs per step of GP-BO 30 and GP-BO 60 decrease by about 25% and 53%, respectively, in the Seattle map, which also verifies that our proposed BKI-based robot exploration methods can improve time efficiency considerably without losing overall exploration performance compared with the other methods.

VI. CONCLUSIONS

This paper contributed a new, efficient learning-based approach for information-theoretic robot exploration in unknown environments. In particular, a continuous information-gain evaluation model for predicting the MI of numerous sampled robot actions is built by introducing the Bayesian kernel inference method. The time complexity of MI prediction is decreased to a logarithmic level in comparison with state-of-the-art methods. An objective function integrating the predicted MI and uncertainty is also designed to balance exploration and exploitation. The proposed method is also verified under an autonomous exploration framework by extensive simulations of different scenes, which reveal that our method outperforms the greedy-based and GP-based exploration methods overall in efficiency without loss of exploration performance, especially in unstructured and large cluttered scenes. Future work mainly involves studying the exploration performance using different α values and kernels, as well as extending our method to 3D scenes.

REFERENCES

[1] H. Azpúrua, M. F. M. Campos, and D. G. Macharet, "Three-dimensional terrain aware autonomous exploration for subterranean and confined spaces," in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 2443–2449.
[2] J. Strader, K. Otsu, and A.-a. Agha-mohammadi, "Perception-aware autonomous mast motion planning for planetary exploration rovers," Journal of Field Robotics, vol. 37, no. 5, pp. 812–829, 2020.
[3] P. Stankiewicz, Y. T. Tan, and M.
Kobilarov, "Adaptive sampling with an autonomous underwater vehicle in static marine environments," Journal of Field Robotics, vol. 38, no. 4, pp. 572–597, 2021.
[4] B. J. Julian, S. Karaman, and D. Rus, "On mutual information-based control of range sensing robots for mapping applications," The International Journal of Robotics Research, vol. 33, no. 10, pp. 1375–1392, 2014.
[5] B. Charrow, S. Liu, V. Kumar, and N. Michael, "Information-theoretic mapping using Cauchy-Schwarz quadratic mutual information," in 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 4791–4798.
[6] Z. Zhang, T. Henderson, S. Karaman, and V. Sze, "FSMI: Fast computation of Shannon mutual information for information-theoretic mapping," The International Journal of Robotics Research, vol. 39, no. 9, pp. 1155–1177, 2020.
[7] Y. Xu, R. Zheng, M. Liu, and S. Zhang, "CRMI: Confidence-rich mutual information for information-theoretic mapping," IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 6434–6441, 2021.
[8] Y. Xu, R. Zheng, S. Zhang, and M. Liu, "Confidence-rich localization and mapping based on particle filter for robotic exploration," in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022, pp. 1–7.
[9] G. A. Hollinger and G. S. Sukhatme, "Sampling-based robotic information gathering algorithms," The International Journal of Robotics Research, vol. 33, no. 9, pp. 1271–1287, 2014.
[10] M. G. Jadidi, J. V. Miró, and G. Dissanayake, "Sampling-based incremental information gathering with applications to robotic exploration and environmental monitoring," The International Journal of Robotics Research, vol. 38, no. 6, pp. 658–685, 2019.
[11] A. Bircher, M. Kamel, K. Alexis, H. Oleynikova, and R. Siegwart, "Receding horizon 'next-best-view' planner for 3D exploration," in 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 1462–1468.
[12] B.
Charrow, V. Kumar, and N. Michael, "Approximate representations for multi-robot control policies that maximize mutual information," Autonomous Robots, vol. 37, no. 4, pp. 383–400, 2014.
[13] K. Yang, S. Keat Gan, and S. Sukkarieh, "A Gaussian process-based RRT planner for the exploration of an unknown and cluttered environment with a UAV," Advanced Robotics, vol. 27, no. 6, pp. 431–443, 2013.
[14] B. Charrow, G. Kahn, S. Patil, S. Liu, K. Goldberg, P. Abbeel, N. Michael, and V. Kumar, "Information-theoretic planning with trajectory optimization for dense 3D mapping," in Robotics: Science and Systems, vol. 11, 2015, pp. 3–12.
[15] R. Marchant and F. Ramos, "Bayesian optimisation for informative continuous path planning," in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 6136–6143.
[16] R. Oliveira, L. Ott, V. Guizilini, and F. Ramos, "Bayesian optimisation for safe navigation under localisation uncertainty," in International Symposium of Robotics Research. Springer, 2020, pp. 489–504.
[17] G. Francis, L. Ott, R. Marchant, and F. Ramos, "Occupancy map building through Bayesian exploration," The International Journal of Robotics Research, vol. 38, no. 7, pp. 769–792, 2019.
[18] S. Bai, J. Wang, K. Doherty, and B. Englot, "Inference-enabled information-theoretic exploration of continuous action spaces," in International Symposium of Robotics Research. Springer, 2015, pp. 419–433.
[19] S. Bai, J. Wang, F. Chen, and B. Englot, "Information-theoretic exploration with Bayesian optimization," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 1816–1822.
[20] S. Bai, F. Chen, and B. Englot, "Toward autonomous mapping and exploration for mobile robots through deep supervised learning," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 2379–2384.
[21] F. Chen, J. D. Martin, Y. Huang, J. Wang, and B.
Englot, "Autonomous exploration under uncertainty via deep reinforcement learning on graphs," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 6140–6147.
[22] T. Wang, R. Liao, J. Ba, and S. Fidler, "NerveNet: Learning structured policy with graph neural networks," in 2018 International Conference on Learning Representations (ICLR), 2018.
[23] W. R. Vega-Brown, M. Doniec, and N. G. Roy, "Nonparametric Bayesian inference on multivariate exponential families," Advances in Neural Information Processing Systems, vol. 27, 2014.
[24] V. Peretroukhin, W. Vega-Brown, N. Roy, and J. Kelly, "PROBE-GK: Predictive robust estimation using generalized kernels," in 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 817–824.
[25] C. Richter, W. Vega-Brown, and N. Roy, "Bayesian learning for safe high-speed navigation in unknown environments," in International Symposium on Robotics Research. Springer, 2015, pp. 325–341.
[26] T. Shan, J. Wang, B. Englot, and K. Doherty, "Bayesian generalized kernel inference for terrain traversability mapping," in Conference on Robot Learning. PMLR, 2018, pp. 829–838.
[27] K. Doherty, T. Shan, J. Wang, and B. Englot, "Learning-aided 3-D occupancy mapping with Bayesian generalized kernel inference," IEEE Transactions on Robotics, vol. 35, no. 4, pp. 953–966, 2019.
[28] L. Gan, R. Zhang, J. W. Grizzle, R. M. Eustice, and M. Ghaffari, "Bayesian spatial kernel smoothing for scalable dense semantic mapping," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 790–797, 2020.
[29] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2005.
[30] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. The MIT Press, 2005.
[31] S. T. O'Callaghan and F. T. Ramos, "Gaussian process occupancy maps," The International Journal of Robotics Research, vol. 31, no. 1, pp. 42–62, 2012.
[32] A. Howard and N.
Roy, "The robotics data set repository (Radish)," 2003. [Online]. Available: http://radish.sourceforge.net/

Bayesian Generalized Kernel Inference for Exploration of Autonomous Robots

Yang Xu, Student Member, IEEE, Ronghao Zheng†, Member, IEEE, Senlin Zhang, Member, IEEE, and Meiqin Liu, Senior Member, IEEE

Abstract— This paper concerns realizing highly efficient information-theoretic robot exploration with desired performance in complex scenes. We build a continuous lightweight inference model to predict the mutual information (MI) and the associated prediction confidence of the robot's candidate actions which have not been evaluated explicitly. This allows the decision-making stage in robot exploration to run with approximately logarithmic complexity, which will also benefit online exploration in large unstructured and cluttered places that need more spatial samples to assess and decide. We also develop an objective function to balance the local optimal action with the highest MI value and the global choice with high prediction variance.
Extensive numerical and dataset simulations show the desired efficiency of our proposed method without losing exploration performance in different environments. We also provide our open-source implementation code released on GitHub for the robot community.

I. INTRODUCTION

Robot exploration has recently gained prevalence in a priori unknown environments such as subterranean, marine, and planetary tasks [1]–[3]. Among the literature, state-of-the-art exploration methods prefer to use information-theoretic metrics in each iteration, such as Shannon mutual information (MI) [4] and its derivatives [5]–[8], to evaluate accurately the information gain brought by candidate control actions, then choose and execute the most informative action; thus the exploration problem naturally becomes a sequential optimal decision-making one. A typical exploration example is in Fig. 1.
Intuitively, the way to tackle this problem is to use a greedy strategy and add more candidate actions, including sampled nodes [9], [10], available viewpoints [11], [12], or special motion primitives [13], [14], to the discrete action space. However, the exploration performance of greedy selection is closely related to the discrete sampling resolution/method of the action space over the map grid: a coarse resolution may lead to sub-optimal actions/paths, while a fine one generates more samples and is more likely to contain the optimal action, but the computational cost of evaluating the information gain of all candidate actions then becomes expensive, since the forward simulation in the evaluation requires extensive raycasting and MI calculation. Notably, these consequences are more pronounced in 3D environments because the increased dimension requires many more samples.

¹Yang Xu, Ronghao Zheng and Senlin Zhang are with the College of Electrical Engineering, Zhejiang University, Hangzhou 310027, China. {xuyang94,rzheng,slzhang}@zju.edu.cn
²Meiqin Liu is with the Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an 710049, China. liumeiqin@zju.edu.cn
³All authors are also with the State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, China.
†Corresponding author

Fig. 1. MI-based active robot exploration in an unknown unstructured environment. (a) Informative trajectory and the resulting occupancy map; (b) resulting MI surface. Note that the coincident yellow squares mark the start and end points. A minimum information threshold is set to select more informative exploration actions, e.g., the middle-left and top-right areas are less informative than the threshold and thus remain unexplored. Note that the scale of MI is in [0, 1] bit in this paper.
In this paper, we aim to realize a more efficient and accurate approach that finds the most informative action without exhaustively and expensively evaluating all candidate actions in robot exploration. Specifically, our main contributions are three-fold:
1) We propose a Bayesian kernel spatial MI inference method to construct a continuous surrogate evaluation model between robot actions and MI values using only a subset of explicitly evaluated samples, which can perform highly efficient MI prediction of control actions in logarithmic time;
2) We develop a reward function comprising the predicted MI values and uncertainties to find the best action, realizing the trade-off between exploration and exploitation, which has been validated in numerical and dataset simulations;
3) Meanwhile, we release an open-source implementation of our proposed method here¹ for the robotics community.
The paper organization is as follows. Related work on recent learning-based robot exploration methods is presented in Section II.
We formulate the problem in Section III and present our Bayesian kernel-based MI inference method in Section IV. Simulation results using synthetic data and a real-world dataset, together with discussions, are given in Section V, followed by conclusions in Section VI.

II. RELATED WORK

In the context of robot exploration, supervised learning techniques provide a powerful tool to approximately find the global optimum by training predictive models using minor parts of actions in continuous action spaces, without evaluating the objective function expensively, which also has better interpretability in black-box inference [15]–[17]. In [18], Bai et al. used the Gaussian process (GP) to model the relationship between control actions and the explicitly evaluated MI for a robot exploring a priori unknown areas. In [19], they further introduced Bayesian optimization (BO) into information-theoretic robot exploration to optimize the GP prediction over multiple iterations, which provides rapid map entropy reduction and ensures computational efficiency. Generally, BO assumes a prior distribution on the objective function and constructs predictive models to describe the underlying relationship between robot actions and their MI.

¹https://github.com/Shepherd-Gregory/BKI-exploration

arXiv:2301.00523v1 [cs.RO] 2 Jan 2023
It also assesses the acquisition function derived from the GP prior and samples, then chooses the next query point that maximizes the acquisition function and balances the trade-off between exploration (global) and exploitation (local). Iteratively, BO yields more precise results on the posterior distribution as the observations (training samples) increase. Rather than evaluating discrete viewpoints, Francis et al. [17] modeled the autonomous exploration and mapping task as a constrained BO aiming to find optimal continuous paths. However, the main bottleneck of the above BO-based robot exploration methods is that the number of training actions N directly affects the resulting prediction accuracy, as well as the computational cost. This implies that one needs to pay for expensive computation to achieve higher exploration performance. Typically, updating and querying the GP models (the engine behind BO) has an overall O(N³) time complexity.
This inevitably compromises the inference efficiency and real-time performance of robot exploration tasks, especially in large-scale and 3D scenes. More recently, deep neural networks (DNNs) have been introduced to predict optimal sensing actions more efficiently. Bai et al. [20] trained a DNN with plenty of randomly generated 2D maps to generate suggested actions and ensure inference in constant time. Graph neural networks (GNNs) have also been combined with reinforcement learning methods to learn the best action from an exploration graph, rather than from metric maps or visual images [21], [22]. Nevertheless, neural network-based robot exploration methods require numerous training samples beforehand and are also limited in their adaptability and generalization capability across different environments, which may need further study in the future.
Encouragingly, the Bayesian kernel inference (BKI) technique proposed in [23] gives us a chance to perform efficient exact inference on a simplified model, rather than expensive approximate inference on an exact generative model (e.g., GP). BKI extends local kernel estimation to Bayesian inference for exponential family likelihood functions, enabling only O(log N_q) run time for inference, where N_q is the number of query samples. These significant merits have promoted BKI's application in robotics, including sensor uncertainty estimation [24], high-speed navigation [25], as well as environment mapping using sparse sensor measurements such as terrain traversability mapping [26], 3D occupancy mapping [27], and semantic mapping [28]. Motivated by [19] and [23], we use BKI to infer the spatial MI in an efficient and closed-form way for the control actions whose MI values have not been explicitly evaluated via expensive computation (e.g., [4]). Compared with existing works such as [18] and [19], our method keeps similar accuracy but is more efficient and better suited for complex scenes requiring numerous explicitly evaluated samples.

III. PRELIMINARIES AND NOTIONS

In this paper, for simplicity of discussion, we mainly consider information-theoretic exploration using a mobile robot equipped with a beam-based range sensor with a limited field of view (FOV) in a 2D environment. The results here can also be extended to 3D cases expediently.

A. Information-Theoretic Exploration

Generally, the robot generates a set of candidate actions X_action in the robot's feasible configuration space X ⊆ SE(2). We also assume this configuration space has been discretized at a fixed resolution over the 2D static grid map.
The set of values m ∈ [0, 1] represents the occupancy level over the independent grid cells and can be updated and queried by the classic log-odds method [29]. The occupancy value of an unobserved map cell ξ is assumed to be uniform, i.e., p(m_ξ) = 0.5. Here we use the classic Shannon MI [4] as the information measure of a candidate configuration x_i = [p_i^x, p_i^y, ψ_i] ∈ X_action, where p_i^x and p_i^y denote the robot's position on the map, and ψ_i denotes the heading angle of the robot. From the view of information theory, the expected information gain of x_i can be evaluated from the current map entropy and the conditional entropy given a new measurement at x_i:

I(m; x_i) = H(m) − H(m | x_i).    (1)

The aim of information-theoretic robot exploration is to select the best action x_best maximizing the expected MI:

x_best = argmax_{x_i ∈ X_action} I(m; x_i).    (2)

Notably, the MI of each configuration can be decomposed over independent beams and then over cells via raycasting, and then the MI accumulated over cells gives the approximation, which has a time complexity quadratic in the map resolution λ_m at worst [7]. This also brings more evaluation costs for robot exploration.

B. Bayesian Generalized Kernel Inference

Consider a supervised learning-based inference problem on predictive stochastic models p(y|x) given a sequence of N observations D = {(x_i, y_i)}_{i=1}^N, where x = {x_i} and y = {y_i} represent the set of evaluated configurations and the resulting MI values I(m; x), respectively. The main objective is to infer the posterior distribution p(y∗|x∗, D) for the target inputs x∗ to be evaluated.
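For concreteness, the training pairs in D come from explicitly evaluating Eq. (1) at sampled configurations. The following is a minimal Python sketch of that step under a deliberately simplified sensor model: a perfect-sensor proxy that scores a configuration by the current Shannon entropy of the cells it would observe, rather than the full beam-based MI of [4]. The names `sample_actions` and `raycast_cells` are hypothetical stand-ins:

```python
import numpy as np

def cell_entropy(p):
    """Shannon entropy (bits) of a Bernoulli occupancy cell."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def explicit_mi(occ_probs):
    """Toy stand-in for Eq. (1): with a perfect-sensor assumption the
    post-measurement entropy of each observed cell is 0, so the gain is
    the sum of current cell entropies over the cells hit by raycasting."""
    return float(np.sum(cell_entropy(np.asarray(occ_probs))))

rng = np.random.default_rng(0)

def sample_actions(n):
    # Hypothetical sampler over SE(2) configurations (px, py, psi).
    return rng.uniform([0, 0, -np.pi], [60, 120, np.pi], size=(n, 3))

def raycast_cells(x, grid):
    # Hypothetical raycasting stub: cells in a small window around x.
    i, j = int(x[0]) % grid.shape[0], int(x[1]) % grid.shape[1]
    return grid[max(i - 2, 0):i + 3, max(j - 2, 0):j + 3].ravel()

# Assemble D = {(x_i, y_i)}: N sampled configurations with evaluated MI.
grid = rng.uniform(0.3, 0.7, size=(60, 120))  # toy occupancy map
X = sample_actions(8)
y = np.array([explicit_mi(raycast_cells(x, grid)) for x in X])
```

This explicit loop is exactly the expensive part the paper's surrogate model avoids repeating for every candidate action.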
This problem can be solved by associating latent parameters θ = {θ_i}_{i=1}^N ∈ Θ with the inputs x in the latent space Θ, where the likelihood p(y|θ) is known. Thus the inference on y∗ can be formulated as an inference on the target parameters θ∗ related to x∗:

p(y∗|x∗, D) = ∫_Θ p(y∗|θ∗) p(θ∗|x∗, D) dθ∗,    (3)

where the posterior distribution of the latent variables can be characterized using Bayes' rule:

p(θ∗|x∗, D) ∝ ∫_Θ ∏_{i=1}^N p(y_i|θ_i) p(θ_{1:N}, θ∗|x_{1:N}, x∗) dθ_{1:N}.

By strongly assuming the latent parameters θ_{1:N} are conditionally independent given the target parameters θ∗, i.e., p(θ_{1:N}, θ∗|x_{1:N}, x∗) = ∏_{i=1}^N p(θ_i|θ∗, x_i, x∗) p(θ∗|x∗), one can marginalize the latent variables θ_{1:N} and then obtain p(θ∗|x∗, D) ∝ ∏_{i=1}^N p(y_i|θ∗, x_i, x∗) p(θ∗|x∗).

BKI further defines a distribution with a special smoothness constraint and bounded Kullback–Leibler divergence (KLD) D_KL(g||f) between the extended likelihood p(y_i|θ∗, x_i, x∗), represented by g, and the likelihood p(y_i|θ_i), represented by f; i.e., the maximum entropy distribution g satisfying D_KL(g||f) ≤ ρ(x∗, x) has the form g(y) ∝ f(y)^{k(x∗,x)}, where ρ(·, ·) : X × X → R⁺ is a smoothness bound and k(·, ·) : X × X → [0, 1] is a kernel function uniquely determined by ρ. Substituting into Eq. (3), we can get:

p(θ∗|x∗, D) ∝ ∏_{i=1}^N p(y_i|θ∗)^{k(x∗, x_i)} p(θ∗|x∗).    (4)

Thus the posterior distribution can be exactly inferred by using a likelihood from the exponential family and assuming the corresponding conjugate prior.

IV. BAYESIAN KERNEL INFERENCE FOR ROBOT EXPLORATION

To efficiently evaluate the exact MI of unknown robot configurations sampled in the spatial action space, we solve this problem in a Bayesian kernel inference way.

A. Bayesian Kernel Spatial MI Inference

As mentioned in Section III-B, we assume the underlying likelihood model between the MI values y and the latent parameters θ follows a Gaussian distribution with unknown mean μ ∈ R^N and fixed, known covariance Σ:

p(y|μ) = N(μ, Σ),  Σ = diag(σ²) ∈ R^{N×N},    (5)

thus its conjugate prior can also be described by a Gaussian distribution using the hyperparameter ζ and the target sample input x∗:

p(μ|x∗) = N(μ₀(x∗), (1/ζ(x∗)) Σ(x∗)),    (6)

where μ₀ and ζ are the initial belief of the mean and the uncertainty of that belief, respectively. ζ = 0 means no confidence and ζ → ∞ indicates full prior knowledge.
Here we assume ζ is a quite small positive constant, since we do not have much prior information about the belief when exploring unknown areas. Therefore, substituting Eq. (6) and Eq. (5) into Eq. (4) given the observations D:

p(μ∗|x∗, D) ∝ ∏_{i=1}^N exp(−(1/2) ((y_i − μ_i)²/σ²) k(x∗, x_i)) exp(−(1/2) ((μ_i − μ₀)²/σ²) ζ),    (7)

and the posterior mean and covariance of the MI can be derived as follows:

Ī(x∗) = E[y∗|x∗, D] = E[μ∗|x∗, D] = (ȳ + ζμ₀)/(ζ + k̄) ≃ ȳ/k̄,
σ_I(x∗) = V[μ∗|x∗, D] = Σ/(ζ + k̄) ≃ Σ/k̄,    (8)

where ȳ and k̄ can be computed by kernel functions:

k̄ = Σ_{i=1}^N k(x∗, x_i),  ȳ = Σ_{i=1}^N k(x∗, x_i) y_i.    (9)

Given a set of observations D evaluated explicitly as input, we can then easily compute the MI and the corresponding confidence for the test spatial configurations x∗ by using Eq. (8) and Eq. (9).
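A minimal sketch of this closed-form prediction (Eqs. (8)–(9)), instantiated with the Matérn 3/2 kernel adopted in Section IV-B and a near-zero ζ. The toy data and hyperparameters (σ², ℓ) are illustrative, and the k-d tree acceleration behind the logarithmic query time is omitted:

```python
import numpy as np

def matern32(x_star, X, ell=5.0):
    """Matérn 3/2 kernel k(x*, x_i) between one query and N training inputs."""
    r = np.linalg.norm(X - x_star, axis=1)
    s = np.sqrt(3.0) * r / ell
    return (1.0 + s) * np.exp(-s)

def bki_predict(x_star, X, y, sigma2=0.04, ell=5.0, zeta=1e-6, mu0=0.0):
    """Posterior mean and variance of the MI at x*, per Eqs. (8)-(9)."""
    k = matern32(x_star, X, ell)
    kbar = k.sum()             # k-bar = sum_i k(x*, x_i)
    ybar = (k * y).sum()       # y-bar = sum_i k(x*, x_i) * y_i
    mean = (ybar + zeta * mu0) / (zeta + kbar)
    var = sigma2 / (zeta + kbar)
    return mean, var

# Toy training set: configurations (px, py, psi) with evaluated MI values.
X = np.array([[10., 10., 0.], [12., 10., 0.5], [40., 50., 1.0]])
y = np.array([0.6, 0.55, 0.1])
mean, var = bki_predict(np.array([11., 10., 0.2]), X, y)
```

Note the role of the small ζ: it leaves the prediction essentially equal to ȳ/k̄ near the data, while preventing a 0/0 for queries so far from every training point that k̄ underflows; there the variance σ²/(ζ + k̄) blows up, correctly signaling low confidence.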
B. Kernel Selection

The kernel function of the BKI method directly affects the computational efficiency and accuracy, so selecting an appropriate kernel is quite significant. In [26]–[28], the chosen sparse kernels remove the training points far away from the queried points, which allows efficient and exact evaluation (e.g., of occupancy, traversability, or semantic class) over the observations in logarithmic run time using k-d trees. Unlike the abundant training data obtained from onboard sensors in mapping tasks, robot exploration generates and evaluates relatively few candidate configurations in a limited space at each time instance, so there is no need to reject the rare training samples in robot exploration tasks. Among the exponential kernel functions, we prefer the Matérn kernel for its capability of handling sudden transitions of terrain [30], [31], since the potential obstacles and unknown structures in application scenes that have never been seen before can vary the MI values greatly.
The typical Matérn kernel function is as follows:

k(x_*, x) = \frac{2^{1-\nu}}{\Gamma(\nu)} \left( \frac{\sqrt{2\nu}\, r}{\ell} \right)^{\nu} K_\nu\!\left( \frac{\sqrt{2\nu}\, r}{\ell} \right), \quad r = \|x_* - x\|,    (10)

where the positive parameters ν and ℓ are the smoothness constant and characteristic length scale, respectively, and Γ(·) and K_ν are the gamma function and the modified Bessel function, respectively. In practice, we choose a Matérn 3/2 kernel (ν = 3/2) of the form

k(x_*, x) = \left(1 + \frac{\sqrt{3}\, r}{\ell}\right) \exp\!\left(-\frac{\sqrt{3}\, r}{\ell}\right).

C. BKI-based Robot Exploration

In robot exploration, we expect the robot to move toward places with high predicted MI values so as to maximize the information gain locally, but this greedy "exploration" may lead to undesired paths, or worse, to the robot getting stuck in cluttered areas. Unexplored places with high predicted uncertainty are also worth visiting, since they may guide a globally optimal path for the robot in a prior unknown area; this behavior is characterized as "exploitation". Therefore, we integrate the prediction confidence of the MI values with the predicted MI to realize a trade-off between exploration and exploitation, and obtain the suggested action maximizing the information objective function based on Eq. (2) and Eq. (8):

x_s = \arg\max_{x_i \in X_{action}} \alpha I(m; x_i) + (1 - \alpha)\sigma_I(x_i),    (11)

where α ∈ [0, 1] is the trade-off factor. The autonomous exploration framework based on our BKI MI inference method is given in Algorithm 1, where Algorithm 2 is the BKI optimization module.

Algorithm 1 BKI Exploration( )
Require: Occupancy map at the k-th time step m_k, previous robot poses x_hist = x_{0:k-1} and current pose x_k, the number of explicitly evaluated samples N, information threshold I_th, the number of querying samples N_q, while-loop count limit N_loop
1: iter = 0
2: while x_hist ≠ ∅ AND iter < N_loop do
3:   iter = iter + 1
4:   // Sample N training actions
5:   x ← Sampling(x_k, m_k, N);
6:   // Evaluate these actions explicitly using Eq. (1)
7:   for each x_i ∈ x do
8:     m_virtual ← Raycasting(x_i, m_k);
9:     I_i ← ComputeMI(m_virtual);
10:    y ← y ∪ I_i;
11:  end for
12:  x_* ← Sampling(x_k, m_k, N_q);
13:  // Find the suggested action using Algorithm 2
14:  {x_best, I_best} ← BKIOptimization({x, y}, x_*);
15:  if max(I_best) > I_th then
16:    x_{k+1} ← x_best(MaxInfoIndex);
17:    x_hist ← x_hist ∪ x_{k+1};
18:  else
19:    x_{k+1} ← x_{k-1}; // Back to the previous action
20:    Remove x_{k-1} from x_hist;
21:  end if
22:  // Execute the action and update the map
23:  P_local ← Astar(x_k, x_{k+1}); // Plan the local path by A*
24:  m_{k+1} ← OccupancyGridMapping(P_local);
25: end while

Proposition 1: The time complexity of our proposed method at each while-loop step in Algorithm 1 is

\underbrace{O(N N_z N_c^2)}_{\text{explicit MI evaluation}} + \underbrace{O(N_{epoch} N \log N_q)}_{\text{BKI MI inference}},    (12)

where N_epoch is the number of training epochs, and N_z and N_c are the number of beams per sensor scan and the number of cells that a beam intersects with the grid map in the worst case, respectively.

Algorithm 2 BKI Optimization( )
Require: Training set D = {(x_i, y_i)}_{i=1}^N, current action set to be evaluated x_*, training epochs N_epoch, factor α
1: x_best ← {}, I_best ← {};
2: for each epoch do
3:   // Compute the kernel function using Eq. (10)
4:   k ← KernelFunction(x_*, x);
5:   // Compute MI and uncertainty using Eq. (8)
6:   k̄ ← Σ k,  ȳ ← k · y;
7:   I_* ← ȳ / k̄,  σ_I^* ← Σ / k̄;
8:   ObjFunc ← α I_* + (1 − α) σ_I^*;
9:   x_s = argmax(ObjFunc);
10:  if x_s ∈ x then
11:    x_best ← x_best ∪ x_s, I_best ← I_best ∪ y_s;
12:  else
13:    // Evaluate the MI explicitly using Eq. (1)
14:    I_s = CalculateMI(x_s);
15:    // Add into D
16:    x_best ← x_best ∪ x_s, x ← x ∪ x_s;
17:    I_best ← I_best ∪ I_s, y ← y ∪ I_s;
18:  end if
19: end for
20: return x_best, I_best

Significantly, the GP-based robot exploration in [18] and the BO-based method in [19] have the same time cost as ours in explicit MI evaluation, but these two methods have computational complexities of O(N^3 + N^2 N_q) and O(N_{epoch}(N^3 + N^2 N_q)), respectively, to perform the expensive GP inference for MI.
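The kernel-weighted prediction step of Algorithm 2 (steps 4–9) can be sketched as follows. This is an illustrative sketch under our own reading of the update: the accumulated kernel mass k̄ and weighted sum ȳ give the predicted MI, the prior variance term (here `sigma2`) divided by k̄ gives the shrinking uncertainty, and the objective blends both with α; the function names, array layout, and parameter values are assumptions, not the paper's code:

```python
import numpy as np

def matern32(r, ell=1.0):
    # Matérn 3/2 kernel on a distance array
    s = np.sqrt(3.0) * r / ell
    return (1.0 + s) * np.exp(-s)

def bki_predict(x_train, y_train, x_query, ell=1.0, sigma2=0.01, alpha=0.5):
    """Kernel-weighted BKI prediction of MI and its uncertainty at query actions,
    plus the exploration-exploitation objective alpha*I + (1-alpha)*sigma."""
    # pairwise distances between query and training actions, shape (Nq, N)
    r = np.linalg.norm(x_query[:, None, :] - x_train[None, :, :], axis=-1)
    K = matern32(r, ell)
    k_bar = K.sum(axis=1)            # accumulated kernel mass per query
    y_bar = K @ y_train              # kernel-weighted sum of evaluated MI
    mi_pred = y_bar / k_bar          # predicted MI, I*
    sigma_pred = sigma2 / k_bar      # uncertainty shrinks with more support
    obj = alpha * mi_pred + (1.0 - alpha) * sigma_pred
    return mi_pred, sigma_pred, int(np.argmax(obj))
```

A query action far from all training actions receives little kernel mass, so its uncertainty term is large and the objective may select it for exploitation, which is exactly the trade-off behavior described above.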
This comparative theoretical result indicates that our BKI-based exploration method outperforms the GP-based methods in time efficiency, especially in large-scale and cluttered places which require more samples N and N_q to be evaluated rapidly.

V. RESULTS AND DISCUSSIONS

In this section, we run numerical simulations and dataset experiments on a desktop PC with a 3.6 GHz Intel i3-9100F CPU and 32 GB RAM to verify the effectiveness of the proposed BKI-based robot exploration method. The information threshold is I_th = 0.05 bit and the trade-off factor is α = 0.5. We adopt a Matérn kernel for GP, with kernel parameters ℓ = 1 and ν = 3/2 for all simulations. We also choose the parameters ζ = 0.001 and σ = 0.01 for the BKI method. The robot poses are assumed to be known, and the robot's candidate actions are sampled uniformly in the FOV of the range sensors. We conduct 20 Monte Carlo trials for all maps. We compare our methods, one named "batch BKI" with only 1 optimization epoch and another named "BKI-BO" with multiple epochs, against greedy-based optimization (named "NBO" in the simulations), batch GP with only 1 optimization epoch ("batch GP") [18], and GP-based BO with multiple epochs ("GP-BO") [19]. Meanwhile, to validate the time efficiencies, we apply 2 cases of N = 30 and N = 60 samples for each method, where GP-BO 30 and BKI-BO 30 use N_epoch = 15 iterations, and GP-BO 60 and BKI-BO 60 use 30 epochs in BKI optimization. We also set N_q = 8N in all simulations.

Fig. 2. An example of BKI-based robot exploration in an unknown structured environment. (a) Informative trajectory in the occupancy map; (b) MI surface. Yellow square: start point; yellow star: end point; red line: robot direction at each action.

A. Synthetic Environments Results

To simulate indoor and field scenes, we generate two 24 m × 14 m synthetic maps: one structured maze map surrounded by several walls (shown in Fig. 2, N_loop = 50), and one unstructured map consisting of circles and ellipses (shown in Fig. 1, N_loop = 150). The map resolutions are both 0.2 m. The simulated range sensor has a FOV of ±1.5 rad with a resolution of 0.05 rad and a maximum sensing range of 6 m. The robot is initially at [1.2 m, 1.2 m] with a 0 rad heading, and tries to explore the prior unknown map. Representative resulting paths maximizing the information objective function are shown in Fig. 1 and Fig. 2. The quantitative results of the structured and unstructured maps are shown in Fig. 3 and Fig. 4, respectively. To compare the exploration performance of the different methods intuitively, we present the evolution of map entropy and coverage rate of each method in the figures, where the solid and dashed lines depict the means over the Monte Carlo trials for each method, and the shaded regions represent the standard deviations. Fig. 3 shows that the BKI and GP methods have performance similar to the NBO methods, since this structured scene is relatively small and simple, especially in the beginning stage where there is only one corridor to move forward. In contrast, Fig. 4 indicates that the NBO methods spend more time (about 50∼70 steps) to converge and end the exploration, while the BKI and GP methods complete the exploration with entropy reduction and coverage rates comparable to the NBOs. Moreover, as in Fig.
5, we use the explicitly evaluated MI as the ground truth and compute the MI prediction errors of the BKI-BO and GP-BO methods with small training samples at a randomly selected step, which implies that the BKI-based approach can resemble the GP-based one in MI inference accuracy when facing challenging cases. In short, these results validate that our BKI methods have properties competitive with the GP-based exploration methods in typical structured and unstructured scenes.

Fig. 3. Map entropy and coverage results of the synthetic structured map. (a), (b) Map entropy; (c), (d) Coverage.

B. Dataset Results

To test our method in a more complex environment, we choose the Seattle map [32], which contains long narrow corridors and cluttered rooms, as in Fig. 6. The map size is 24 m × 14 m with a resolution of 0.2 m. We use a simulated laser scanner emitting 20 beams uniformly within a FOV of ±π/3 rad at a maximum range of 4 m. The robot starts at [13, 57] m with a −π/2 rad initial heading angle. N_loop is set to 100. Fig. 7 presents the comparative curves of map entropy and coverage rates. Fig. 7(a) shows that the BKI-BO methods have more rapid reduction rates of map entropy after the exploration starts and arrive at relatively lower levels than the other methods; among them, BKI-BO 60 performs the best. In this typical cluttered map, the GP-BO methods perform slightly inferior to our BKI-BO methods but almost catch up with ours, and both are much better than the NBO methods. The curves in Fig. 7(b) imply that batch GP and batch BKI have similar performance. We can also get an insight from Fig. 7(c) and (d): the coverage curves of the BKI-BO methods converge slightly earlier than those of the GP-BO methods and reach higher values, and all BO-based methods explore the unknown place much faster than the NBO ones.
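The map-entropy and coverage metrics tracked in these experiments can be computed from the occupancy grid in the standard way. This is a generic sketch: the cell-wise Shannon entropy is the usual definition, while the "known cell" threshold `eps` is an illustrative assumption, since the paper does not state its exact coverage criterion:

```python
import numpy as np

def map_entropy(p_occ):
    """Total Shannon entropy (bits) of an occupancy grid of cell probabilities."""
    p = np.clip(np.asarray(p_occ, dtype=float), 1e-12, 1.0 - 1e-12)
    return float(np.sum(-p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)))

def coverage(p_occ, eps=0.05):
    """Fraction of cells classified as known, i.e. probability near 0 (free)
    or near 1 (occupied) rather than near the unknown prior of 0.5."""
    p = np.asarray(p_occ, dtype=float)
    known = np.abs(p - 0.5) > 0.5 - eps
    return float(np.mean(known))
```

An entirely unknown map (all cells at 0.5) contributes 1 bit per cell and zero coverage; as exploration drives cell probabilities toward 0 or 1, entropy falls and coverage rises, which is exactly the behavior plotted in the entropy and coverage curves above.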
This result evidences that our BKI methods are more suitable for large cluttered environments.

Fig. 4. Map entropy and coverage results of the synthetic unstructured map. (a), (b) Map entropy; (c), (d) Coverage.

C. Time Efficiency

We have presented the exploration results in the previous simulations of typical scenes, and our BKI-based method has shown the desired exploration performance in efficiency and accuracy compared with state-of-the-art methods. To give a more intuitive and specific comparison, we further analyze the time cost of each method per exploration step in all maps. As shown in Table I, the results report the time cost of the whole exploration process per step in the form of means and standard deviations, as well as the average percentage of evaluation and decision-making time spent by each method in each step.

TABLE I
TIME COST COMPARISON OF DIFFERENT EXPLORATION METHODS
Each entry: average share of per-step time spent on evaluation and decision-making / per-step time cost (mean ± std).

Methods              | Synthetic structured map   | Synthetic unstructured map | Seattle map [32]
NBO 30               | 95.29% / 10.4455 ± 0.9409  | 96% / 12.1683 ± 1.3856     | 96.95% / 4.8434 ± 0.7311
NBO 60               | 95.38% / 10.9967 ± 1.0676  | 95.93% / 12.4971 ± 2.1583  | 96.93% / 5.3502 ± 0.9009
batch GP 30          | 5.15% / 0.4387 ± 0.0246    | 4.66% / 0.2805 ± 0.0232    | 12.68% / 0.2134 ± 0.0169
batch GP 60          | 6.44% / 0.4444 ± 0.0487    | 5.89% / 0.3021 ± 0.0362    | 14.94% / 0.2291 ± 0.0226
batch BKI 30 (ours)  | 3.05% / 0.4324 ± 0.0276    | 2.93% / 0.2485 ± 0.0346    | 7.67% / 0.2036 ± 0.0254
batch BKI 60 (ours)  | 3.87% / 0.4407 ± 0.0384    | 3.56% / 0.2731 ± 0.0356    | 9.05% / 0.2065 ± 0.0229
GP-BO 30             | 49.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='71% / 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='9435 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='0609 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='09% / 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='6083 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='1121 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='97% / 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='5203 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='0554 GP-BO 60 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='03% / 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='8265 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='1189 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='99% / 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='3558 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='1190 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='26% / 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='0528 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='1124 BKI-BO 30 (ours) 39% / 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='7518 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='0683 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='14% / 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='514 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='0966 30 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='03% / 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='3903 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='1175 BKI-BO 60 (ours) 53.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='74% / 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='9952 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='1061 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='31% / 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='7363 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='1186 62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='45% / 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='4955 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='1775 Note: Time cost of inference per step (in percentage) / Total time cost of exploration per step of each method (in sec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=') 0 100 200 300 400 500 5 0 5 MI Error (bits) BKI prediction GP prediction Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 5.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' A challenging example of MI prediction error comparison using BKI and GP methods trained with fewer samples in a randomly selected exploration step.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' (a) Exploration trajectory 50 100 150 200 250 X(grid) 20 40 60 80 100 Y(grid) 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='6 (b) MI surface Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' An example of BKI-based robot exploration in the large cluttered Seattle map [32].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' White square: start point;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' White star: end point.' 
Among the 10 methods, the basic NBO methods have the most expensive time consumption per step (roughly 8–50 times that of the BKI and GP methods), while the GP- and BKI-based methods cost much less time, showing the efficiency of the Bayesian optimization-based approaches. We can analyze these results further from two perspectives. From the top row to the bottom, our BKI-based methods achieve better time efficiency in decision-making and inference than the corresponding GP-based ones in all maps as the number of samples increases, e.g., batch BKI 30/60 vs. batch GP 30/60 and BKI-BO 30/60 vs. GP-BO 30/60. We can also observe that the BKI methods run faster than the GP ones when using more training epochs. The BKI methods also bring significant time savings for exploration, reducing time by about 20% and 45% compared with GP-BO 30 and GP-BO 60, respectively, in the structured map. From the left column to the right, these differences become more distinct in the unstructured and large cluttered maps; e.g., the per-step time costs of GP-BO 30 and GP-BO 60 decrease by about 25% and 53% in the Seattle map, respectively. This verifies that our proposed BKI-based robot exploration methods improve time efficiency considerably without losing overall exploration performance compared with the other methods.

Fig. 7. Map entropy and coverage results of the Seattle map (batch methods omitted). (a) Map entropy; (b) Coverage.
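The per-step inference gap discussed above can be illustrated with a minimal numerical sketch. All function names, the kernels, and the toy MI surface below are our illustrative assumptions, not the paper's implementation: exact GP regression requires a cubic-cost linear solve over all n stored MI samples, whereas a BKI-style predictor returns a closed-form kernel-weighted average with no matrix factorization at all.

```python
import numpy as np

def gp_predict(X, y, Xq, ell=0.5, noise=1e-3):
    """Exact GP posterior mean: needs an O(n^3) solve over all samples."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell ** 2)
    K = rbf(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)            # cubic in the sample count
    return rbf(Xq, X) @ alpha

def bki_predict(X, y, Xq, ell=0.5, eps=1e-9):
    """BKI-style estimate: closed-form kernel-weighted average of the
    observed MI values; the sparse kernel is zero beyond radius ell."""
    d = np.sqrt(((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    w = np.clip(1.0 - d / ell, 0.0, None)    # compact support -> few nonzeros
    return (w @ y) / (w.sum(axis=1) + eps)

# Toy stand-in for (sampled action, measured MI) pairs in 2-D
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = np.sin(3.0 * X[:, 0]) + X[:, 1]
Xq = rng.uniform(0.0, 1.0, size=(10, 2))     # newly sampled candidate actions
mu_gp, mu_bki = gp_predict(X, y, Xq), bki_predict(X, y, Xq)
```

Both estimators produce MI predictions of the same shape; the difference is that `bki_predict` never factorizes an n-by-n matrix, which is consistent with the low inference percentages reported for the batch BKI and BKI-BO rows of the table.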
VI. CONCLUSIONS

This paper contributed a new, efficient learning-based approach for information-theoretic robot exploration in unknown environments. In particular, a continuous information gain evaluation model for predicting the MI of numerous sampled robot actions is built by introducing the Bayesian kernel inference method. The time complexity of MI prediction is decreased to a logarithmic level in comparison with state-of-the-art methods.
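The logarithmic complexity comes from the fact that a kernel with compact support only ever touches the stored samples inside its radius, which a space-partitioning index can return in roughly O(log n) time per query. A hedged sketch of that lookup, using SciPy's k-d tree as a stand-in for whatever index an implementation might choose (names, radius, and kernel are our assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
actions = rng.uniform(0.0, 10.0, size=(2000, 2))   # stored sampled actions
mi = np.cos(actions[:, 0]) + 0.1 * actions[:, 1]   # stand-in MI observations

tree = cKDTree(actions)   # built once; each radius query is ~O(log n)
RADIUS = 0.8              # kernel support: weights vanish beyond this distance

def predict_mi(query):
    """Estimate MI at a new action from the few samples inside the
    kernel support, located via the k-d tree rather than a full scan."""
    idx = tree.query_ball_point(query, RADIUS)
    if not idx:
        return 0.0                          # fall back to the prior mean
    d = np.linalg.norm(actions[idx] - query, axis=1)
    w = 1.0 - d / RADIUS                    # sparse linear kernel (illustrative)
    return float(np.dot(w, mi[idx]) / w.sum())
```

A full scan over all stored samples would make every prediction linear in n; confining the estimate to the kernel's support and indexing the samples is what keeps per-query work near-logarithmic.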
An objective function integrating the predicted MI and uncertainty is also designed to balance exploration and exploitation. The proposed method is verified under an autonomous exploration framework by extensive simulations of different scenes, which reveal that our method outperforms the greedy-based and GP-based exploration methods in overall efficiency without loss of exploration performance, especially in unstructured and large cluttered scenes. Future work mainly involves studying the exploration performance using different α values and kernels, as well as extending our method to 3D scenes.

REFERENCES

[1] H. Azpúrua, M. F. M. Campos, and D. G. Macharet, "Three-dimensional terrain aware autonomous exploration for subterranean and confined spaces," in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 2443–2449.
[2] J. Strader, K. Otsu, and A.-a. Agha-mohammadi, "Perception-aware autonomous mast motion planning for planetary exploration rovers," Journal of Field Robotics, vol. 37, no. 5, pp. 812–829, 2020.
[3] P. Stankiewicz, Y. T. Tan, and M. Kobilarov, "Adaptive sampling with an autonomous underwater vehicle in static marine environments," Journal of Field Robotics, vol. 38, no. 4, pp. 572–597, 2021.
[4] B. J. Julian, S. Karaman, and D. Rus, "On mutual information-based control of range sensing robots for mapping applications," The International Journal of Robotics Research, vol. 33, no. 10, pp. 1375–1392, 2014.
[5] B. Charrow, S. Liu, V. Kumar, and N. Michael, "Information-theoretic mapping using Cauchy-Schwarz quadratic mutual information," in 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 4791–4798.
[6] Z. Zhang, T. Henderson, S. Karaman, and V. Sze, "FSMI: Fast computation of Shannon mutual information for information-theoretic mapping," The International Journal of Robotics Research, vol. 39, no. 9, pp. 1155–1177, 2020.
[7] Y. Xu, R. Zheng, M. Liu, and S. Zhang, "CRMI: Confidence-rich mutual information for information-theoretic mapping," IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 6434–6441, 2021.
[8] Y. Xu, R. Zheng, S. Zhang, and M. Liu, "Confidence-rich localization and mapping based on particle filter for robotic exploration," in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022, pp. 1–7.
[9] G. A. Hollinger and G. S. Sukhatme, "Sampling-based robotic information gathering algorithms," The International Journal of Robotics Research, vol. 33, no. 9, pp. 1271–1287, 2014.
[10] M. G. Jadidi, J. V. Miró, and G. Dissanayake, "Sampling-based incremental information gathering with applications to robotic exploration and environmental monitoring," The International Journal of Robotics Research, vol. 38, no. 6, pp. 658–685, 2019.
[11] A. Bircher, M. Kamel, K. Alexis, H. Oleynikova, and R. Siegwart, "Receding horizon 'next-best-view' planner for 3D exploration," in 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016, pp. 1462–1468.
[12] B. Charrow, V. Kumar, and N. Michael, "Approximate representations for multi-robot control policies that maximize mutual information," Autonomous Robots, vol. 37, no. 4, pp. 383–400, 2014.
[13] K. Yang, S. Keat Gan, and S. Sukkarieh, "A Gaussian process-based RRT planner for the exploration of an unknown and cluttered environment with a UAV," Advanced Robotics, vol. 27, no. 6, pp. 431–443, 2013.
[14] B. Charrow, G. Kahn, S. Patil, S. Liu, K. Goldberg, P. Abbeel, N. Michael, and V. Kumar, "Information-theoretic planning with trajectory optimization for dense 3D mapping," in Robotics: Science and Systems, vol.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 11, 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 3–12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [15] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Marchant and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Ramos, “Bayesian optimisation for informative continuous path planning,” in 2014 IEEE International Conference on Robotics and Automation (ICRA).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' IEEE, 2014, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 6136–6143.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [16] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Oliveira, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Ott, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Guizilini, and F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Ramos, “Bayesian optimisation for safe navigation under localisation uncertainty,” in International Symposium of Robotics Research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Springer, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 489–504.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [17] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Francis, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Ott, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Marchant, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Ramos, “Occupancy map building through bayesian exploration,” The International Journal of Robotics Research, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 38, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 7, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 769–792, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [18] S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Bai, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Wang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Doherty, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Englot, “Inference-enabled information-theoretic exploration of continuous action spaces,” in International Symposium of Robotics Research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Springer, 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 419–433.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [19] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Bai, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Wang, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Chen, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Englot, “Information-theoretic ex- ploration with bayesian optimization,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' IEEE, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 1816–1822.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [20] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Bai, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Chen, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Englot, “Toward autonomous mapping and exploration for mobile robots through deep supervised learning,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' IEEE, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 2379–2384.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [21] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Chen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Martin, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Huang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Wang, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Englot, “Autonomous exploration under uncertainty via deep reinforcement learning on graphs,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' IEEE, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 6140–6147.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [22] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Wang, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Liao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Ba, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Fidler, “Nervenet: Learning structured policy with graph neural networks,” in 2018 International Conference on Learning Representations (ICLR), 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [23] W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Vega-Brown, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Doniec, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Roy, “Nonparametric bayesian inference on multivariate exponential families,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 27, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [24] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Peretroukhin, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Vega-Brown, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Roy, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Kelly, “PROBE-GK: Predictive robust estimation using generalized kernels,” in 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 817–824.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [25] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Richter, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Vega-Brown, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Roy, “Bayesian learning for safe high-speed navigation in unknown environments,” in International Symposium on Robotics Research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Springer, 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 325–341.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [26] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Shan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Wang, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Englot, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Doherty, “Bayesian generalized kernel inference for terrain traversability mapping,” in Conference on Robot Learning.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' PMLR, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 829–838.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [27] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Doherty, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Shan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Wang, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Englot, “Learning-aided 3-d occupancy mapping with bayesian generalized kernel inference,” IEEE Transactions on Robotics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 35, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 953–966, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [28] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Gan, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Zhang, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Grizzle, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Eustice, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Ghaffari, “Bayesian spatial kernel smoothing for scalable dense semantic map- ping,” IEEE Robotics and Automation Letters, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 5, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 790– 797, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [29] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Thrun, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Burgard, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Fox, Probabilistic Robotics.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' MIT Press, 2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [30] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Rasmussen and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Williams, Gaussian Processes for Machine Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' The MIT Press, 2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [31] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' O’Callaghan and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Ramos, “Gaussian process occupancy maps,” The International Journal of Robotics Research, vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 31, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' 42–62, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [32] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Howard and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Roy, “The robotics data set repository (radish),” 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content=' Available: http://radish.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAyT4oBgHgl3EQfpPjF/content/2301.00523v1.pdf'} +page_content='sourceforge.' 
diff --git a/6tE4T4oBgHgl3EQfCAsz/content/tmp_files/2301.04856v1.pdf.txt b/6tE4T4oBgHgl3EQfCAsz/content/tmp_files/2301.04856v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..952cc542d2434dba76b1063a1ac5d5dc2afdeb71
--- /dev/null
+++ b/6tE4T4oBgHgl3EQfCAsz/content/tmp_files/2301.04856v1.pdf.txt
@@ -0,0 +1,13443 @@

Multimodal Deep Learning
arXiv:2301.04856v1 [cs.CL] 12 Jan 2023

Contents

Preface
Foreword
1 Introduction
1.1 Introduction to Multimodal Deep Learning
1.2 Outline of the Booklet
2 Introducing the modalities
2.1 State-of-the-art in NLP
2.2 State-of-the-art in Computer Vision
2.3 Resources and Benchmarks for NLP, CV and multimodal tasks
3 Multimodal architectures
3.1 Image2Text
3.2 Text2Image
3.3 Images supporting Language Models
3.4 Text supporting Vision Models
3.5 Models for both modalities
4 Further Topics
4.1 Including Further Modalities
4.2 Structured + Unstructured Data
4.3 Multipurpose Models
4.4 Generative Art
5 Conclusion
6 Epilogue
6.1 New influential architectures
6.2 Creating videos
7 Acknowledgements

Preface
Author: Matthias Aßenmacher

FIGURE 1: LMU seal (left) style-transferred to Van Gogh’s Sunflower painting (center) and blended with the prompt - Van Gogh, sunflowers - via CLIP+VGAN (right).

In the last few years, there have been several breakthroughs in the methodologies used in Natural Language Processing (NLP) as well as Computer Vision (CV). Beyond these improvements on single-modality models, large-scale multimodal approaches have become a very active area of research.

In this seminar, we reviewed these approaches and attempted to create a solid overview of the field, starting with the current state-of-the-art approaches in the two subfields of Deep Learning individually. Further, modeling frameworks are discussed where one modality is transformed into the other (Chapter 3.1 and Chapter 3.2), as well as models in which one modality is utilized to enhance representation learning for the other (Chapter 3.3 and Chapter 3.4). To conclude the second part, architectures with a focus on handling both modalities simultaneously are introduced (Chapter 3.5). Finally, we also cover other modalities (Chapter 4.1 and Chapter 4.2) as well as general-purpose multimodal models (Chapter 4.3), which are able to handle different tasks on different modalities within one unified architecture. One interesting application (Generative Art, Chapter 4.4) eventually caps off this booklet.

FIGURE 2: Creative Commons License

This book is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Foreword
Author: Matthias Aßenmacher

This book is the result of an experiment in university teaching. We were inspired by a group of other PhD students around Christoph Molnar, who conducted another seminar on Interpretable Machine Learning in this format.
Instead of letting every student work on a seminar paper, more or less in isolation from the other students, we wanted to foster collaboration between the students and enable them to produce a tangible output (one that isn’t destined to spend the rest of its days in (digital) drawers). In the summer term 2022, some Statistics, Data Science and Computer Science students signed up for our seminar entitled “Multimodal Deep Learning” and had (before the kick-off meeting) no idea what they had signed up for: writing an entire book by the end of the semester.

We were bound by the examination rules for conducting the seminar, but otherwise we could deviate from the traditional format. We deviated in several ways:

1. Each student project is a chapter of this booklet, linked in terms of content to other chapters, since there is partly a large overlap between the topics.
2. We gave challenges to the students instead of papers. The challenge was to investigate a specific impactful recent model or method from the field of NLP, Computer Vision or Multimodal Learning.
3. We designed the work to live beyond the seminar.
4. We emphasized collaboration. Students wrote the introductions to chapters in teams and reviewed each other’s individual texts.

Technical Setup

The book chapters are written in the Markdown language. The simulations, data examples and visualizations were created with R (R Core Team, 2018). To combine R code and Markdown, we used rmarkdown. The book was compiled with the bookdown package. We collaborated using git and GitHub. For details, head over to the book’s repository.

1 Introduction
Author: Nadja Sauter
Supervisor: Matthias Aßenmacher

1.1 Introduction to Multimodal Deep Learning

There are five basic human senses: hearing, touch, smell, taste and sight. Possessing these five modalities, we are able to perceive and understand the world around us.
Thus, “multimodal” means to combine different channels of information simultaneously to understand our surroundings. For example, when toddlers learn the word “cat”, they use different modalities by saying the word out loud, pointing at cats and making sounds like “meow”. Using the human learning process as a role model, artificial intelligence (AI) researchers also try to combine different modalities to train deep learning models. On a superficial level, deep learning algorithms are based on a neural network that is trained to optimize some objective, which is mathematically defined via the so-called loss function. The optimization, i.e. minimizing the loss, is done via a numerical procedure called gradient descent. Consequently, deep learning models can only handle numeric input and can only produce numeric output. However, in multimodal tasks we are often confronted with unstructured data like pictures or text. Thus, the first major problem is how to represent the input numerically. The second issue with regard to multimodal tasks is how exactly to combine different modalities. For instance, a typical task could be to train a deep learning model to generate a picture of a cat. First of all, the computer needs to understand the text input “cat” and then somehow translate this information into a specific image. Therefore, it is necessary to identify the contextual relationships between words in the text input and the spatial relationships between pixels in the image output. What might be easy for a toddler in pre-school is a huge challenge for the computer. Both have to learn some understanding of the word “cat” that comprises the meaning and appearance of the animal. A common approach in modern deep learning is to generate embeddings that represent the cat numerically as a vector in some latent space.
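The loss-minimization loop described above can be sketched in a few lines of plain Python. This is a purely illustrative toy example, not code from the booklet: the data, the squared-error loss and the learning rate are arbitrary choices made up for the sketch.

```python
# Toy gradient descent: fit a single weight w so that w * x ≈ y.
# The loss is the mean squared error; its gradient w.r.t. w is derived by hand.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0    # initial parameter
lr = 0.05  # learning rate (step size)

for step in range(200):
    # d/dw of mean((w*x - y)^2) is mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # one gradient-descent update

print(round(w, 2))  # converges close to 2.0, the slope of the toy data
```

Real deep learning replaces the single weight with millions of parameters and computes the gradients automatically via backpropagation, but the update rule is the same.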
However, to achieve this, different approaches and architectures have been developed in recent years. This book gives an overview of the different methods used in state-of-the-art (SOTA) multimodal deep learning to overcome challenges arising from unstructured data and from combining inputs of different modalities.

1.2 Outline of the Booklet

Since multimodal models often use text and images as input or output, methods of Natural Language Processing (NLP) and Computer Vision (CV) are introduced as a foundation in Chapter 2. Methods in the area of NLP try to handle text data, whereas CV deals with image processing. With regard to NLP (subsection 2.1), one concept of major importance is the so-called word embedding, which is nowadays an essential part of (nearly) all multimodal deep learning architectures. This concept also sets the foundation for transformer-based models like BERT (Devlin et al., 2018a), which achieved a huge improvement in several NLP tasks. Especially the (self-)attention mechanism (Vaswani et al., 2017a) of transformers revolutionized NLP models, which is why most of them rely on the transformer as a backbone. In Computer Vision (subsection 2.2), different network architectures, namely ResNet (He et al., 2015), EfficientNet (Tan and Le, 2019a), SimCLR (Chen et al., 2020a) and BYOL (Grill et al., 2020b), will be introduced. In both fields it is of great interest to compare the different approaches and their performance on challenging benchmarks. For this reason, the last subsection 2.3 of Chapter 2 gives an overall overview of different data sets, pre-training tasks and benchmarks for CV as well as for NLP.

The second chapter (see 3) focuses on different multimodal architectures, covering a wide variety of ways in which text and images can be combined. The presented models combine and advance different methods of NLP and CV.
First of all, looking at Img2Text tasks (subsection 3.1), the data set Microsoft COCO for object recognition (Lin et al., 2014a) and the meshed-memory transformer for image captioning (M2 Transformer) (Cornia et al., 2019) will be presented. Conversely, researchers have developed methods to generate pictures based on a short text prompt (subsection 3.2). The first models accomplishing this task were generative adversarial networks (GANs) (Goodfellow et al., 2014b) and variational autoencoders (VAEs) (Kingma and Welling, 2019). These methods were improved in recent years, and today’s SOTA transformer architectures and text-guided diffusion models like DALL-E (Ramesh et al., 2021a) and GLIDE (Nichol et al., 2021a) achieve remarkable results. Another interesting question is how images can be utilized to support language models (subsection 3.3). This can be done via sequential embeddings, more advanced grounded embeddings or, again, inside transformers. On the other hand, one can also look at text supporting CV models like CLIP (Radford et al., 2021b), ALIGN (Jia et al., 2021a) and Florence (Yuan et al., 2021) (subsection 3.4). They make use of foundation models, i.e. reusing models (e.g. CLIP inside DALL-E 2), as well as a contrastive loss for connecting text with images. Besides, zero-shot learning makes it possible to classify new and unseen data without expensive fine-tuning. Especially the open-source architecture CLIP (Radford et al., 2021b) for image classification and generation attracted a lot of attention last year. At the end of the second chapter, some further architectures that handle text and images simultaneously are introduced (subsection 3.5). For instance, Data2Vec uses the same learning method for speech, vision and language and in this way aims to find a general approach for handling different modalities in one architecture.
Furthermore, VilBert (Lu et al., 2019a) extends the popular BERT architecture to handle both image and text as input by implementing co-attention. This method is also used in DeepMind’s Flamingo (Alayrac et al., 2022). In addition, Flamingo aims to tackle multiple tasks with a single visual language model via few-shot learning and freezing the pre-trained vision and language models.

In the last chapter (see 4), methods are introduced that are also able to handle modalities other than text and image, e.g. video, speech or tabular data. The overall goal here is to find a general multimodal architecture based on challenges rather than modalities. Therefore, one needs to handle problems of multimodal fusion and alignment and to decide whether to use a joint or a coordinated representation (subsection 4.1). Moreover, we go into more detail about how exactly to combine structured and unstructured data (subsection 4.2). To this end, different fusion strategies which evolved in recent years will be presented. This is illustrated in this book by two use cases in survival analysis and economics. Besides this, another interesting research question is how to tackle different tasks in one so-called multi-purpose model (subsection 4.3), as intended by Google researchers (Barham et al., 2022) with their “Pathways” model. Last but not least, we show one exemplary application of multimodal deep learning in the arts scene, where image generation models like DALL-E (Ramesh et al., 2021a) are used to create art pieces in the area of generative arts (subsection 4.4).

2 Introducing the modalities

Authors: Cem Akkus, Vladana Djakovic, Christopher Benjamin Marquardt
Supervisor: Matthias Aßenmacher

Natural Language Processing (NLP) has existed for about 50 years, but it is more relevant than ever. There have been several breakthroughs in this branch of machine learning, which is concerned with spoken and written language.
For example, learning internal representations of words was one of the greater advances of the last decade. Word embeddings (Mikolov et al., 2013a; Bojanowski et al., 2016) made this possible by allowing developers to encode words as dense vectors that capture their underlying semantic content. In this way, similar words are embedded close to each other in a lower-dimensional feature space. Another important challenge was solved by encoder-decoder (also called sequence-to-sequence) architectures (Sutskever et al., 2014), which made it possible to map input sequences to output sequences of different lengths. They are especially useful for complex tasks like machine translation, video captioning or question answering. This approach makes minimal assumptions about the sequence structure and can deal with different word orders as well as with active and passive voice.

Another significant state-of-the-art technique is attention (Bahdanau et al., 2014), which enables models to actively shift their focus, just like humans do. It allows following one thought at a time while suppressing information irrelevant to the task. As a consequence, it has been shown to significantly improve performance for tasks like machine translation. By giving the decoder the ability to directly look at the source, the bottleneck is avoided and, at the same time, it provides a shortcut to faraway states and thus helps with the vanishing gradient problem. One of the most recent sequence data modeling techniques is the transformer (Vaswani et al., 2017b), which is solely based on attention and does not have to process the input data sequentially (like RNNs). Therefore, the model is better at remembering context introduced earlier in long sequences. It is currently the dominant paradigm in NLP and even makes better use of GPUs, because it can perform parallel operations.
Transformer architectures like BERT (Devlin et al., 2018b), T5 (Raffel et al., 2019a) or GPT-3 (Brown et al., 2020) are pre-trained on a large corpus and can be fine-tuned for specific language tasks. They have the capability to generate stories, poems, code and much more. With the help of the aforementioned breakthroughs, deep networks have been successful in retrieving information and finding representations of semantics in the text modality. In the next paragraphs, developments for another modality, images, are going to be presented.

Computer vision (CV) focuses on replicating parts of the complexity of the human visual system and enabling computers to identify and process objects in images and videos in the same way that humans do. In recent years it has become one of the main and most widely applied fields of computer science. However, there are still open problems that are current research topics, whose solutions depend on the researchers’ view of the topic. One of these problems is how to optimize deep convolutional neural networks for image classification. The accuracy of classification depends on width, depth and image resolution. One way to address the degradation of training accuracy is to introduce a deep residual learning framework (He et al., 2015). Another, less common, way to achieve better accuracy is to scale up ConvNets, e.g. via the image resolution. Based on this observation, a simple yet effective compound scaling method, called EfficientNet, was proposed (Tan and Le, 2019a). Another state-of-the-art trend in computer vision is learning effective visual representations without human supervision.
Discriminative approaches based on contrastive learning in the latent space have recently shown great promise, achieving state-of-the-art results, and the simple framework for contrastive learning of visual representations, called SimCLR, outperforms previous work (Chen et al., 2020a). However, other research proposes as an alternative a simple “swapped” prediction problem, where the code of one view is predicted from the representation of another view; here, features are learned by Swapping Assignments between multiple Views of the same image (SwAV) (Caron et al., 2020). Further recent contrastive methods are trained by reducing the distance between representations of different augmented views of the same image (“positive pairs”) and increasing the distance between representations of augmented views from different images (“negative pairs”). Bootstrap Your Own Latent (BYOL) is a new algorithm for self-supervised learning of image representations (Grill et al., 2020b).

Self-attention-based architectures, in particular transformers, have become the model of choice in natural language processing (NLP). Inspired by NLP successes, multiple works try combining CNN-like architectures with self-attention, some replacing the convolutions entirely. The latter models, while theoretically efficient, have not yet been scaled effectively on modern hardware accelerators due to the use of specialized attention patterns. Inspired by the transformer scaling successes in NLP, one line of experiments applies a standard transformer directly to images (Dosovitskiy et al., 2020b). Due to the widespread application of computer vision, these problems differ and are constantly at the center of attention of more and more research.

With the rapid development in NLP and CV in recent years, it was just a question of time until both modalities were merged to tackle multi-modal tasks.
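The contrastive idea described above, pulling representations of positive pairs together and pushing negative pairs apart, can be sketched in a few lines. This is a heavily simplified, hypothetical stand-in for a SimCLR-style objective; all vectors and the temperature value are made up for illustration:

```python
import math

# Toy sketch of a contrastive objective (in the spirit of SimCLR's
# NT-Xent loss, heavily simplified): two augmented views of the same
# image (a "positive pair") should be more similar to each other than
# to views of other images ("negative pairs"). All vectors are made up.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

anchor    = [1.0, 0.1]                  # view 1 of image A
positive  = [0.9, 0.2]                  # view 2 of image A (augmented)
negatives = [[0.0, 1.0], [-0.8, 0.4]]   # views of other images

temperature = 0.5
sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
logits = [s / temperature for s in sims]

# cross-entropy with the positive pair as the "correct class"
denom = sum(math.exp(l) for l in logits)
loss = -math.log(math.exp(logits[0]) / denom)
print(round(loss, 3))  # small, since the positive pair is already most similar
```

Minimizing this loss over many (anchor, positive, negatives) tuples is what drives the encoder to produce useful representations without any labels.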
The release of DALL-E 2 just hints at what one can expect from this merge in the future. DALL-E 2 is able to create photorealistic images or even art from any given text input. So it takes the information of one modality and turns it into another modality. Multi-modal datasets are needed to make this possible, and they are still relatively rare, which underscores the importance of available data and the ability to use it. Nevertheless, all modalities need huge datasets to pre-train their models. It is common to pre-train a model and fine-tune it afterwards for a specific task on another dataset. For example, every state-of-the-art CV model uses a classifier pre-trained on an ImageNet-based dataset. The cardinality of the datasets used for CV is immense, but the datasets used for NLP are of a completely different magnitude. BERT uses the English Wikipedia and the BooksCorpus for pre-training. The latter consists of almost 1 billion words and 74 million sentences. The pre-training corpus of GPT-3 is composed of five huge corpora: CommonCrawl, Books1, Books2, Wikipedia and WebText2. Unlike language model pre-training, which can leverage tremendous amounts of natural language data, vision-language tasks require high-quality image descriptions that are hard to obtain for free. Widely used pre-training datasets for vision-language pre-trained models (VL-PTMs) are Microsoft Common Objects in Context (COCO), Visual Genome (VG), Conceptual Captions (CC), Flickr30k, LAION-400M and LAION-5B, the latter now being the biggest openly accessible image-text dataset.

Besides the importance of pre-training data, there must also be a way to test or compare the different models. A reasonable approach is to compare the performance on specific tasks, which is called benchmarking. A nice feature of benchmarks is that they allow us to compare the models to a human baseline. Different metrics are used to compare the performance of the models.
Accuracy is widely used, but there are also some others. For CV, the most common benchmark datasets are ImageNet, ImageNet-ReaL, CIFAR-10(0), Oxford-IIIT Pet, Oxford Flowers 102, COCO and the Visual Task Adaptation Benchmark (VTAB). The most common benchmarks for NLP are the General Language Understanding Evaluation (GLUE), SuperGLUE, SQuAD 1.1, SQuAD 2.0, SWAG, RACE, ReCoRD and CoNLL-2003. VTAB, GLUE and SuperGLUE also provide a public leaderboard. Cross-modal tasks such as Visual Question Answering (VQA), Visual Commonsense Reasoning (VCR), Natural Language Visual Reasoning (NLVR), Flickr30K, COCO and Visual Entailment are common benchmarks for VL-PTMs.

2.1 State-of-the-art in NLP

Author: Cem Akkus
Supervisor: Matthias Aßenmacher

2.1.1 Introduction

Natural Language Processing (NLP) has existed for about 50 years, but it is more relevant than ever. There have been several breakthroughs in this branch of machine learning, which is concerned with spoken and written language. In this work, the most influential ones of the last decade are going to be presented, starting with word embeddings, which efficiently model word semantics. Encoder-decoder architectures represent another step forward by making minimal assumptions about the sequence structure. Next, the attention mechanism allows human-like focus shifting to put more emphasis on the more relevant parts. Then, the transformer applies attention in its architecture to process the data non-sequentially, which boosts the performance on language tasks to exceptional levels. Finally, the most influential transformer architectures are reviewed before a few current topics in natural language processing are discussed.

2.1.2 Word Embeddings

As mentioned in the introduction, one of the earlier advances in NLP was learning internal representations of words.
Before that, a big problem with text modelling was its messiness, while machine learning algorithms undoubtedly prefer structured and well-defined fixed-length inputs. On a granular level, the models rather work with numerical than with textual data. Thus, by using very basic techniques like one-hot encoding or bag-of-words, a text is converted into an equivalent vector of numbers.

In the example depicting one-hot encoding (see Figure 2.1), there are ten simple words and the dark squares indicate the only index with a non-zero value.

FIGURE 2.1: Ten one-hot encoded words (Source: Pilehvar and Camacho-Collados (2021))

In contrast, there are multiple non-zero values when using bag-of-words, which is another way of extracting features from text for use in modelling, where we measure whether a word from a vocabulary of known words is present. It is called bag-of-words because the order is disregarded here.

Treating words as atomic units has some plausible reasons, like robustness and simplicity. It was even argued that simple models trained on a huge amount of data outperform complex models trained on less data. However, simple techniques are problematic for many tasks, e.g. when it comes to relevant in-domain data for automatic speech recognition. The size of high-quality transcribed speech data is often limited to just millions of words, so simply scaling up simpler models is not possible in certain situations and therefore more advanced techniques are needed. Additionally, thanks to the progress of machine learning techniques, it has become realistic to train more complex models on massive amounts of data. Logically, more complex models generally outperform basic ones.
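The two basic techniques above can be sketched with a made-up toy vocabulary (the words and the sentence are invented for illustration):

```python
# Toy sketch of one-hot encoding and bag-of-words with a tiny,
# hypothetical vocabulary.

vocab = ["cat", "dog", "sat", "on", "the", "mat"]
index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    # exactly one non-zero entry, at the word's vocabulary index
    vec = [0] * len(vocab)
    vec[index[word]] = 1
    return vec

def bag_of_words(tokens):
    # counts per vocabulary word; the order of tokens is disregarded
    vec = [0] * len(vocab)
    for token in tokens:
        vec[index[token]] += 1
    return vec

print(one_hot("cat"))                                   # [1, 0, 0, 0, 0, 0]
print(bag_of_words("the cat sat on the mat".split()))   # [1, 0, 1, 1, 2, 1]
```

Note how the vector length equals the vocabulary size and most entries stay zero, which is exactly the sparsity and dimensionality issue discussed next.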
Other disadvantages of classic word representations are the curse of dimensionality and the generalization problem. The former becomes a problem because the growing vocabulary equivalently increases the feature size, which results in sparse and high-dimensional vectors. The latter occurs because the similarity between words is not captured, so previously learned information cannot be reused. Besides, assigning a distinct vector to each word is a limitation, which becomes especially obvious for languages with large vocabularies and many rare words.

To combat the shortcomings of simple word representations, word embeddings enable the use of efficient and dense representations in which similar words have a similar encoding. So words that are closer in the vector space are expected to be similar in meaning. An embedding is hereby defined as a vector of floating point values (with the length of the vector being a hyperparameter). The values of the embedding are trainable parameters which are learned similarly to a model learning the weights of a dense layer. The dimensionality of the word representations is typically much smaller than the number of words in the dictionary. For example, Mikolov et al. (2013a) called dimensions between 50 and 100 modest for more than a few hundred million words. For small data sets, the dimensionality of the word vectors could start at 8, going up to 1024 for larger data sets. It is expected that higher dimensions can pick up more intricate relationships between words, given enough data to learn from.

For any NLP task, it is sensible to start with word embeddings because they allow one to conveniently incorporate prior knowledge into the model and can be seen as a basic form of transfer learning. It is important to note that even though embeddings attempt to represent the meaning of words, and do so to an extent, the semantics of a word in a given context cannot be captured.
This is due to the words having static precomputed representations in traditional embedding techniques. Thus, the word "bank" can either refer to a financial institution or to a river bank. Contextual embedding methods offer a solution, but more about them will follow later.

FIGURE 2.2: Three-dimensional word embeddings (Source: Pilehvar and Camacho-Collados (2021)).

It should be noted that words can have various degrees of similarity. In the context of inflectional languages, this becomes obvious because words are adjusted to articulate grammatical categories. For example, in a subspace of the original vector space, nouns that have similar endings can be found. However, it even exceeds simple syntactic regularities. With straightforward operations on the word vectors, it can be shown that vector(King) − vector(Man) + vector(Woman) equals a vector that is closest in vector space (and therefore in meaning) to the word "Queen". A simple visualization of this relationship can be seen in the left graph of Figure 2.3. The three coordinate systems are representations of higher dimensions that are depicted via dimensionality reduction techniques. Furthermore, the verb-to-tense relationship is expressed in the middle graphic, which extends the earlier insight about similar word endings, because in this instance the past tenses of the verbs walking and swimming are not similar in structure. Additionally, on the right side of the figure, there is a form of the commonly portrayed and easily understood country-capital example (see Mikolov et al. (2013a)).

FIGURE 2.3: Three types of similarities as word embeddings (Source: Google (2022)).
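The King − Man + Woman arithmetic can be illustrated with hand-crafted toy vectors. These are not trained embeddings; the numbers are invented so that the analogy works out, which is all the sketch is meant to show:

```python
import math

# Hand-crafted 3-d toy vectors (NOT trained embeddings) that merely
# mimic the gender/royalty structure of the King - Man + Woman example.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# vector(King) - vector(Man) + vector(Woman)
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

# nearest remaining word (excluding the query words) by cosine similarity
best = max((w for w in vectors if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(target, vectors[w]))
print(best)  # queen
```

With real embeddings the nearest-neighbor search runs over the full vocabulary, but the principle is the same vector arithmetic.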
Another way of using vector representations of words is in the field of translation. It has been shown that relations can be drawn between the feature spaces of different languages. In Figure 2.4, the distributed word representations of numbers in English and Spanish are compared. In this case, the same numbers have similar geometric arrangements, which suggests that mapping linearly between the vector spaces of languages is feasible. Applying this simple method to a larger set of translations between English and Spanish led to remarkable results, achieving almost 90% precision.

FIGURE 2.4: Representations of numbers in English and Spanish (Source: Mikolov et al. (2013c)).

This technique was then used for other experiments. One use case is the detection of dictionary errors. Taking translations from a dictionary and computing their geometric distance returns a confidence measure. Closely evaluating the translations with low confidence and outputting an alternative (one that is closest in vector space) results in a plain way to assess dictionary translations. Furthermore, training the word embeddings on large corpora makes it possible to give sensible out-of-dictionary predictions for words. This was tested by randomly removing a part of the vocabulary beforehand. A look at the predictions revealed that they were often, to some extent, related to the true translations with regard to meaning and semantics. Despite the accomplishments in other tasks, translations between distant languages exposed shortcomings of word embeddings.
For example, the accuracy of translations between English and Vietnamese seemed significantly lower. This can be ascribed to the two languages not having a good one-to-one correspondence, because the concept of a word in Vietnamese is different from that in English. In addition, the Vietnamese model used contains numerous synonyms, which complicates making exact predictions (see Mikolov et al. (2013c)).

We now turn our attention to one of the most impactful embedding techniques: word2vec. It was proposed by Mikolov et al. (2013a) and is not a singular algorithm. It can rather be seen as a family of model architectures and optimizations to learn word representations. Word2vec's popularity also stems from its success on multiple downstream natural language processing tasks. It has a very simple structure which is based on a basic feed-forward neural network. The authors published multiple papers (see Mikolov et al. (2013a), Mikolov et al. (2013c), Mikolov et al. (2013d)) that center around two different but related methods for learning word embeddings (see Figure 2.5). Firstly, the continuous bag-of-words model aims to predict the middle word based on the surrounding context words. Hence, it considers components before and after the target word. As the order of words in the context is not relevant, it is called a bag-of-words model. Secondly, the continuous skip-gram model only considers the current word and predicts others within a range before and after it in the same sentence. Both models use a softmax classifier for the output layer.

FIGURE 2.5: CBOW and Skip-gram architecture (Source: Mikolov et al. (2013a)).

Then, Bojanowski et al.
(2016) built on skip-gram models by accounting for the morphology (internal structure) of words. A different classical embedding architecture that at least has to be mentioned is the GloVe model, which does not use a neural network but combines local context information with global co-occurrence statistics.

2.1.3 Encoder-Decoder

The field of natural language processing is concerned with a variety of different tasks surrounding text. Depending on the type of NLP problem, the network may be confronted with variable-length sequences as input and/or output. This is the case for many compelling applications, such as question answering, dialogue systems or machine translation. In the following, many examples will explore machine translation in more detail, since it is a major problem domain. Regarding translation tasks, it becomes obvious that input sequences need to be mapped to output sequences of different lengths. To manage this type of input and output, a design with two main parts could be useful. The first one is called the encoder because, in this part of the network, a variable-length input sequence is transformed into a fixed state. The second component, called the decoder, maps the encoded state to an output sequence of variable length. As a whole, this is known as an encoder-decoder or sequence-to-sequence architecture and has become an effective and standard approach for many applications which even recurrent neural networks with gated hidden units have trouble solving successfully. Deep RNNs may have a chance, but architectures like the encoder-decoder have proven to be the most effective. They can even deal with different word orders and with active as well as passive voice (Sutskever et al., 2014).
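The division of labor described above can be sketched with a toy, non-neural stand-in: an "encoder" that folds a variable-length input into a fixed-size context vector, and a "decoder" that unrolls it into an output of a different length. The hash-based transition is purely illustrative and replaces the learned RNN updates:

```python
def encode(tokens, state_size=4):
    # Fold a variable-length input into a fixed-size context vector c.
    # A real encoder RNN would apply a learned transition per token;
    # this arithmetic is just a deterministic stand-in.
    c = [0.0] * state_size
    for pos, token in enumerate(tokens):
        for i in range(state_size):
            c[i] = 0.5 * c[i] + 0.1 * ((hash(token) >> i) % 7) + 0.01 * pos
    return c

def decode(c, length):
    # Produce an output sequence whose length differs from the input's;
    # each step conditions on the context c and the previous "output",
    # mirroring the structure f(h, y_prev, c) of a decoder RNN.
    outputs, prev = [], 0.0
    for t in range(length):
        y = sum(c) + 0.5 * prev + t
        outputs.append(y)
        prev = y
    return outputs

c = encode(["I", "am", "a", "student"])
print(len(c), len(decode(c, 3)))  # fixed-size context, 3-step output
```

The point is purely structural: the context has a fixed size regardless of the input length, and the output length is chosen independently of the input length.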
A simplified example of the encoder-decoder model can be seen in Figure 2.6.

FIGURE 2.6: Translation through a simplified seq2seq model (Source: Manning et al. (2022)).

Before going through the equations quantifying these concepts, it makes sense to examine the sequence-to-sequence design proposed by Cho et al. (2014). An encoder-RNN processes the input sequence of length $n_x$ and computes a fixed-length context vector $c$, which is usually the final hidden state of the encoder or a simple function of the hidden states. As the input sequence is processed, the information is incorporated into the hidden state and passed forward in time through the recurrent connections between the hidden states of the encoder. Despite the context vector usually being a simple function of the last hidden state, its role should not be underestimated. Specifically, the encoded state summarizes important information from the input sequence, e.g. the intent in a question answering task or the meaning of a text in the case of machine translation. After the context is passed to every hidden state of the decoder, the decoder-RNN uses this information to produce the target sequence of length $n_y$, which can of course differ from $n_x$.

At the latest through the above illustration, it is clear that the decoder is particularly interesting to look at in the form of equations. The notation mainly follows Cho et al. (2014). The decoder is another type of RNN which is trained to predict the target based on the hidden state at the last time step. However, unlike regular RNNs, it is also conditioned on the output of the last time step ($y^{[t-1]}$) and a summary of the input $c$. Therefore, the hidden state of the decoder is computed by:

$$h_d^{[t]} = f(h_d^{[t-1]}, y^{[t-1]}, c).$$

FIGURE 2.7: Encoder-decoder architecture (Source: Cho et al. (2014)).
Similarly, each conditional probability is given by the following, where $f$ is a non-linear activation function that must produce valid probabilities (e.g. the softmax function):

$$P(y^{[t]} \mid y^{[1]}, \ldots, y^{[t-1]}, c) = f(h_d^{[t]}, y^{[t-1]}, c).$$

The two parts are jointly trained to maximize the conditional log-likelihood, where $\theta$ denotes the set of model parameters and $(x_n, y_n)$ is an (input sequence, output sequence) pair from the training set of size $N$:

$$\max_{\theta} \frac{1}{N} \sum_{n=1}^{N} \log p_{\theta}(y_n \mid x_n).$$

The most probable output is usually found by using the beam search algorithm. Its core idea is that on each step of the decoder, we keep track of the $k$ most probable partial translations (which are called hypotheses).

Examining the translation presented earlier, with hidden units unrolled through time, could look like Figure 2.8. In particular, multiple hidden layers are recommended by the researchers. The idea is that lower layers compute lower-level features and higher layers compute higher-level features.

FIGURE 2.8: Translation through a seq2seq model (Source: Manning et al. (2022)).

Gated recurrent networks, especially long short-term memory (LSTM) networks, have been found to be effective in both components of the sequence-to-sequence architecture. Furthermore, it was revealed that deep LSTMs significantly outperform shallow LSTMs. Each additional layer reduced perplexity by nearly 10%, possibly due to their much larger hidden state. For example, Sutskever et al. (2014) used deep LSTMs with 4 layers and 1000 cells at each layer with 1000-dimensional word embeddings. Thus, in total, 8000 real numbers are used to represent a sentence. For simplicity, these neural networks will in the following be referred to as RNNs, which does not contradict the insights of this paragraph, as LSTMs are a type of gated RNN (Sutskever et al., 2014).
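The beam search idea mentioned above can be sketched as follows. The next-token distribution is a hand-made stand-in for a real decoder softmax over a tiny, hypothetical vocabulary; only the bookkeeping (expand each hypothesis, keep the k best) is the point:

```python
import math

def step_logprobs(last_token):
    # Hypothetical next-token probabilities over a 3-word vocabulary,
    # chosen by hand for the sketch (a real decoder would compute these).
    if last_token == "a":
        probs = {"a": 0.1, "b": 0.3, "</s>": 0.6}
    else:
        probs = {"a": 0.5, "b": 0.3, "</s>": 0.2}
    return {w: math.log(p) for w, p in probs.items()}

def beam_search(k=2, max_len=5):
    beams = [(["<s>"], 0.0)]           # (partial hypothesis, log-probability)
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens[-1] == "</s>":   # finished hypotheses are kept as-is
                candidates.append((tokens, score))
                continue
            for w, lp in step_logprobs(tokens[-1]).items():
                candidates.append((tokens + [w], score + lp))
        # keep only the k most probable partial translations
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:k]
    return beams[0]

best_tokens, best_score = beam_search()
print(best_tokens)  # ['<s>', 'a', '</s>']
```

With k = 1 this degenerates to greedy decoding; larger k trades computation for a better approximation of the most probable output sequence.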
2.1.4 Attention

Although encoder-decoder architectures simplified dealing with variable-length sequences, they also caused complications. Due to their design, the encoding of the source sentence is a single vector representation (the context vector). The problem is that this state must compress all information about the source sentence into a single vector; this is commonly referred to as the bottleneck problem. To be precise, the entire semantics of arbitrarily long sentences need to be wrapped into a single hidden state. Moreover, it constitutes a difficult learning problem because the information needs to be passed between numerous time steps. This leads to vanishing gradients within the network as a consequence of factors less than 1 being multiplied with each other at every step. To illustrate, the last sentence is an ideal example of one with which an encoder-decoder approach could have difficulty coping, in particular if the sentences are longer than the ones in the training corpus (Manning et al., 2022).

For the aforementioned reasons, an extension to the sequence-to-sequence architecture was proposed by Bahdanau et al. (2014), which learns to align and translate jointly. For every generated word, the model scans through some positions in the source sentence where the most relevant information is located. Afterwards, based on the surrounding context and the previously generated words, the model predicts the target word for the current time step. This approach is called attention, as it emulates human-like (cognitive) attention. As a result of directly looking at the source and bypassing the bottleneck, it provides a solution to the problem.
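The mechanism can be sketched with made-up numbers (the hidden states below are toy vectors, not outputs of a trained RNN); the three steps mirror the usual recipe: dot-product scores, a softmax, and a weighted sum of the encoder states:

```python
import math

# Sketch of dot-product attention with invented toy vectors.
encoder_states = [
    [1.0, 0.0],   # h_e^[1]
    [0.0, 1.0],   # h_e^[2]
    [0.7, 0.7],   # h_e^[3]
]
decoder_state = [0.9, 0.1]  # h_d^[t]

# 1) attention scores: dot product of decoder state with each encoder state
scores = [sum(d * e for d, e in zip(decoder_state, h)) for h in encoder_states]

# 2) softmax turns the scores into a probability distribution
exps = [math.exp(s) for s in scores]
weights = [e / sum(exps) for e in exps]

# 3) attention output: weighted sum of the encoder hidden states
attn_output = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(len(decoder_state))]

print([round(w, 3) for w in weights])
```

The first encoder state, being most aligned with the decoder state, receives the largest weight, so the attention output mostly carries its information.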
It also mitigates the vanishing gradient problem, since there is now a shortcut to faraway states. Consequently, incorporating the attention mechanism has been shown to considerably boost the performance of models on NLP tasks.

A walkthrough of the example in Figure 2.9 should resolve any outstanding questions regarding the procedure of the attention mechanism. The source sentence, given in French, is seen on the bottom left and acts as the input for the encoder-RNN (in red). The attention scores (in blue) are computed by taking the dot product between the decoder hidden state and each encoder hidden state. Next, the softmax function turns the scores into a probability distribution (in pink), which is used to take a weighted sum of the encoder's hidden states and form the attention output. The attention output mostly contains information from the hidden states that received high attention. Afterwards, it is concatenated with the decoder hidden state (in green) and used to compute the decoder output as before. In some scenarios, the attention output is also fed into the decoder (along with the usual decoder input). This specific example was chosen because "entarter" means "to hit someone with a pie" and is therefore a word that needs to be translated with several words. Since there is no direct equivalent for this phrase, more than one attention score is expected to be clearly non-zero; in this snapshot, the attention distribution indeed has two significant contributors.

The following equations compactly represent the relations brought forward in the last paragraphs and mainly follow Manning et al. (2022). The attention scores e^{[t]} are computed by taking the dot product of the decoder hidden state with each of the encoder hidden states:

\[ e^{[t]} = \left[ (h_d^{[t]})^T h_e^{[1]}, \dots, (h_d^{[t]})^T h_e^{[N]} \right]. \]

Besides basic dot-product attention, there are also other ways to calculate the attention scores, e.g.
through multiplicative or additive attention. Although they will not be discussed further at this point, it makes sense to at least mention them. Applying the softmax to the scalar scores results in the attention distribution α^{[t]}, a probability distribution whose values sum to 1:

\[ \alpha^{[t]} = \mathrm{softmax}(e^{[t]}). \]

FIGURE 2.9: Translation process with attention mechanism (Source: Manning et al. (2022)).

Next, the attention output a^{[t]} is obtained by using the attention distribution as weights for the encoder hidden states:

\[ a^{[t]} = \sum_{i=1}^{N} \alpha_i^{[t]} h_{e,i}. \]

Concatenating the attention output with the decoder hidden state and proceeding as in the non-attention sequence-to-sequence model are the final steps:

\[ o^{[t]} = f([a^{[t]}; h_d^{[t]}]). \]

By visualizing the attention distribution, also called alignments (see Bahdanau et al. (2014)), it is easy to observe what the decoder was focusing on and to understand why it chose a specific translation. In the plot in Figure 2.10, the x-axis corresponds to the words in the source sentence (English) and the y-axis to the words in the generated translation (French). Each pixel shows the weight of the source word for the respective target word in grayscale, where 0 is black and 1 is white. This makes apparent which positions in the source sentence were most relevant when generating each target word. As expected, the alignment between English and French is largely monotonic: the pixels are brighter, and therefore the weights higher, along the main diagonal of the matrix. However, there is an exception, because adjectives and nouns are typically ordered differently in the two languages. Thus, the model (correctly) translated "European Economic Area" into "zone économique européenne".
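As a numerical illustration of the attention equations above, the following sketch computes dot-product scores, the softmax distribution, and the weighted sum over toy encoder states. The dimensions and random states are illustrative assumptions, not values from any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: N encoder states, hidden size d (illustrative values).
N, d = 5, 8
h_enc = rng.normal(size=(N, d))   # encoder hidden states h_e^[1..N]
h_dec = rng.normal(size=d)        # current decoder hidden state h_d^[t]

# Dot-product attention scores e^[t] between decoder and encoder states.
scores = h_enc @ h_dec            # shape (N,)

# Softmax turns the scores into the attention distribution alpha^[t].
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()

# Attention output a^[t]: encoder states weighted by alpha^[t].
a = alpha @ h_enc                 # shape (d,)

# Concatenate with the decoder state before computing the output as usual.
combined = np.concatenate([a, h_dec])  # shape (2d,)
```

Subtracting the maximum score before exponentiating is a standard numerical-stability trick and does not change the resulting distribution.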
By jumping over two words ("European" and "Economic"), it aligned "zone" with "area". Then, it looked one word back twice to complete the phrase "zone économique européenne". Additional qualitative analysis has shown that the model's alignments are predominantly analogous to our intuition.

FIGURE 2.10: Attention alignments (Source: Bahdanau et al. (2014)).

2.1.5 Transformer

For this section, Manning et al. (2022) constitutes the main source.

RNNs are unrolled from one side to the other, i.e. from left to right or from right to left. This encodes linear locality, which is a useful heuristic, because nearby words often affect each other's meaning. But what happens when distant words need to interact with each other? For instance, if a person is mentioned at the beginning of a text and referred back to only at the very end, the whole text in between needs to be tracked (see Figure 2.11). Hence, RNNs take O(sequence length) steps for distant word pairs to interact. Due to gradient problems, it is therefore hard to learn long-distance dependencies. In addition, the linear order is ingrained, even though, as discussed, the sequential structure does not tell the whole story.

FIGURE 2.11: Sequential processing of recurrent model (Source: Manning et al. (2022)).

GPUs can perform many calculations simultaneously and could massively reduce the execution time of deep learning algorithms. However, forward and backward passes lack parallelizability in recurrent models and require O(sequence length) sequential steps: future hidden states cannot be computed in full before past states have been computed. This inhibits training on massive data sets. Figure 2.12 indicates the minimum number of steps before the respective state can be calculated.
FIGURE 2.12: Sequential processing of recurrent model with number of steps indicated (Source: Manning et al. (2022)).

After attention had been shown to dramatically increase performance, Google researchers took it a step further and based transformers solely on attention, i.e. without any RNNs. For this reason, the paper in which they were introduced is called "Attention is all you need". Spoiler: it is not quite all we need, but more about that on the following pages. Transformers have achieved great results in multiple settings such as machine translation and document generation. Their parallelizability allows for efficient pretraining and has made them the standard model architecture. In fact, all top models on the popular aggregate benchmark GLUE are pretrained and Transformer-based. Moreover, they have even shown promise outside of NLP, e.g. in image classification, protein folding and ML for systems (see Dosovitskiy et al. (2020a), Jumper et al. (2021), Zhou et al. (2020), respectively).

If recurrence has its flaws, another adjustment of the attention mechanism might be beneficial. Until now, attention was defined from the decoder to the encoder. Alternatively, attention can also go from one state to all states in the same set. This is the definition of self-attention, which is encoder-encoder or decoder-decoder attention (instead of encoder-decoder) and represents a cornerstone of the transformer architecture. Figure 2.13 depicts this process, in which each word attends to all words in the previous layer (in practice, most arrows are omitted in the figure for readability).

FIGURE 2.13: Connections of classic attention mechanism (Source: Manning et al. (2022)).

Thinking of self-attention as an approximate hash table eases understanding its intuition. To look up a value, queries are compared against keys in a table.
In a hash table, which is shown on the left side of Figure 2.14, there is exactly one key-value pair for each query (hash). In contrast, in self-attention each key is matched by each query to a varying degree. Thus, a sum of values weighted by the query-key match is returned.

FIGURE 2.14: Comparison of classic attention mechanism with self-attention with hash tables (Source: Manning et al. (2022)).

The process briefly described in the last paragraph can be summarized by the following steps, which mainly follow Manning et al. (2022). First, query, key, and value vectors are derived for each word x_i:

\[ q_i = W^Q x_i, \quad k_i = W^K x_i, \quad v_i = W^V x_i. \]

Second, the attention scores are calculated:

\[ e_{ij} = q_i^T k_j. \]

Third, the softmax function is applied to normalize the attention scores:

\[ \alpha_{ij} = \mathrm{softmax}(e_{ij}) = \frac{\exp(e_{ij})}{\sum_k \exp(e_{ik})}. \]

Lastly, taking the weighted sum of the values yields the attention output:

\[ a_i = \sum_j \alpha_{ij} v_j. \]

Incorporating self-attention instead of recurrence has multiple advantages. Since all words interact at every layer, the maximum interaction distance is O(1), which is a crucial upgrade. In addition, the model is deeply bidirectional, because each word attends to the context in both directions. Thanks to these advances, all word representations of a layer can be computed in parallel. Nevertheless, some issues have to be discussed. Attention does no more than weighted averaging, so without neural networks there are no element-wise non-linearities. Their importance cannot be overstated, which shows why attention is not actually all that is needed. Furthermore, bidirectionality is not always desired.
In language modelling, the model should specifically not be allowed to simply look ahead and observe more than the objective allows. Moreover, the word order is no longer encoded, so the representation is bag-of-words once again. Fortunately, these weaknesses have been addressed in the original transformer architecture proposed by Vaswani et al. (2017c). The first problem can easily be fixed by applying a feed-forward layer to the output of attention, which provides non-linear activation as well as extra expressive power. For cases in which bidirectionality contradicts the learning objective, future states can be masked so that attention is restricted to previous states. Moreover, the loss of the word order can be corrected by adding position representations to the inputs.

The more complex deep learning models are, the closer they come to modelling the complexity of the real world. That is why the transformer encoder and decoder consist of many layers of self-attention with a feed-forward network, which is necessary to extract both syntactic and semantic features from sentences. Otherwise, using word embeddings, which are semantically deep representations between words, would be unnecessary (Sejnowski, 2020). At the same time, training deep networks can be troublesome, so some tricks are applied to help with the training process.

One of them is to pass the "raw" embeddings directly to the next layer, which prevents forgetting or misrepresenting important information as it is passed through many layers. This technique is called residual connections and is also believed to smoothen the loss landscape. Additionally, it is problematic to train the parameters of a given layer when its inputs keep shifting because of the layers beneath. Normalizing within each layer to mean zero and standard deviation one reduces this uninformative variation and weakens the effect.
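A minimal NumPy sketch of masked self-attention followed by a residual connection and layer normalization, i.e. the fixes just described. All dimensions and weight initializations are toy assumptions, and the division by √d_k anticipates the scaling motivated in the next paragraph:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 4, 8  # toy sequence length and model dimension (illustrative)

x = rng.normal(size=(T, d))                  # input word representations
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

q, k, v = x @ W_q, x @ W_k, x @ W_v          # queries, keys, values

# Scaled dot-product scores between every pair of positions.
scores = q @ k.T / np.sqrt(d)

# Mask future positions with -inf so each word attends only to the past.
mask = np.triu(np.ones((T, T), dtype=bool), k=1)
scores[mask] = -np.inf

# Softmax over each row yields the attention weights.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
attn_out = weights @ v

# Residual connection plus layer normalization, the training tricks above.
def layer_norm(z, eps=1e-5):
    mu = z.mean(axis=-1, keepdims=True)
    sigma = z.std(axis=-1, keepdims=True)
    return (z - mu) / (sigma + eps)

out = layer_norm(x + attn_out)
```

Because exp(−∞) = 0, the masked positions receive exactly zero weight, so the first position attends only to itself while later positions attend to everything before them.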
Another challenge is caused by the dot products tending to take on extreme values, because their variance scales with increasing dimensionality d_k. It is solved by scaled dot-product attention (see Figure 2.15), which consists of computing the dot products of the queries with the keys, dividing them by the square root of the key dimension, √d_k, and then applying the softmax function to obtain the weights of the values.

FIGURE 2.15: Scaled dot-product attention (Source: Vaswani et al. (2017c)).

Attention learns where to search for relevant information. Surely, attending to different types of information in a sentence at once delivers even more promising results. To implement this, the idea is to have multiple attention heads per layer. While one attention head might learn to attend to tense information, another might learn to attend to relevant topics. Thus, each head focuses on separate features and constructs its value vectors differently. Multi-headed self-attention is implemented by simply creating n independent attention mechanisms and combining their outputs.

At this point, every part that constitutes the encoder in the transformer architecture has been introduced (see Figure 2.17). First, positional encodings are added to the input embeddings. There are multiple ways to realize this step, e.g. through sinusoids. The multi-head attention just described follows. "Add & Norm" stands for the residual connections and the normalization layer. A feed-forward network follows, which is also accompanied by residual connections and a normalization layer. All of this is repeated n times. For the decoder, the individual components are similar. One difference is that the outputs go through masked multi-head attention before the multi-head attention and the feed-forward network (with residual connections and layer normalization). It is critical to ensure that the decoder cannot peek at the
future. To execute this, the set of keys and queries could be modified at every time step to only include past words; however, this would be very inefficient. Instead, to enable parallelization, future states are masked by setting their attention scores to −∞. After the decoder block has also been repeated n times, a linear layer projects the embeddings into a vector of the length of the vocabulary size. At last, a softmax layer generates a probability distribution over the possible words.

FIGURE 2.16: Multi-head attention (Source: Vaswani et al. (2017c)).

FIGURE 2.17: Transformer architecture (Source: Vaswani et al. (2017c)).

2.1.6 Transformer architectures: BERT, T5, GPT-3

"You shall know a word by the company it keeps", an adage by linguist John Rupert Firth from 1957 goes. Even earlier, in 1935, he stated that "... the complete meaning of a word is always contextual, and no study of meaning apart from a complete context can be taken seriously". The quotes of the famous linguist perfectly sum up the motivation for learning word meaning and context. Many years later, in 2017, the pretraining of word embeddings took off. However, some complications arise from solely pretraining the first part of the network. For instance, to teach the model all contextual aspects of language, the training data for the downstream task (e.g. question answering) needs to be sufficiently large. Additionally, most of the parameters are usually randomly initialized.
Figure 2.18 presents the network discussed, in which the word "movie" gets the same embedding irrespective of the sentence it appears in. In modern NLP architectures, by contrast, all parameters are initialized via pretraining (see Figure 2.19). During pretraining, certain parts of the input are hidden and the model is trained to reconstruct them. This leads to suitable parameter initializations and robust probability distributions over language.

FIGURE 2.18: Partly pre-trained model (Source: Manning et al. (2022)).

Classic machine learning does not match human learning: a model trained from scratch can only learn from its training data, whereas human beings already have prior knowledge they can apply to new tasks. Transfer learning emulates this by using an already trained network. The main idea is to take a model that was pretrained on a hard, general language understanding task using vast amounts of data, so that it eventually contains the best possible approximation of language understanding. Afterwards, the training data for the new task is used to slightly modify the weights of the pretrained model, which is referred to as fine-tuning (Manning et al., 2022).

FIGURE 2.19: Jointly pre-trained model (Source: Manning et al. (2022)).

The specific architecture of a transformer model affects the type of pretraining and the favourable use cases. In the following, three different but very influential transformer architectures are discussed: BERT can be seen as stacked encoders (Devlin et al., 2018b), T5 aims to combine the good parts of encoders and decoders (Raffel et al., 2019a), and GPT consists of stacked decoders (Brown et al., 2020).

2.1.6.1 BERT

Transfer learning led to state-of-the-art results in natural language processing.
One of the architectures that led the way was BERT, which stands for Bidirectional Encoder Representations from Transformers. It receives bidirectional context, which is why it is not a natural fit for language modelling. To train it on this objective regardless, masked language modelling was proposed. The main idea is to cover up a fraction of the input words and let the model predict them. In this way, the LM objective can be used while sustaining connections to words in the future. The masked LM for BERT randomly predicts 15% of all word tokens in each sequence. Of those, 80% are replaced by the [MASK] token, 10% by a random token, and 10% remain unchanged. Moreover, because the masked words are not even seen in the fine-tuning phase, the model cannot get complacent and has to rely on strong representations of non-masked words. Initially, BERT had the additional objective of predicting whether one sentence follows another, known as next sentence prediction. However, it was dropped in later work due to having an insignificant effect.

BERT is hugely versatile and became greatly popular after its release. Fine-tuning BERT led to outstanding results on a variety of applications, including question answering, sentiment analysis and text summarization. Due to its design, however, pretrained decoders outperform pretrained encoders like BERT when the task involves generating sequences. Thus, even though it is not recommended for autoregressive generation, "small" models like BERT are to this day applied as general tools for numerous tasks.

2.1.6.2 T5

The Text-To-Text Transfer Transformer (T5) can be regarded as an application of the insights gathered by an extensive empirical study searching for the best transfer learning techniques. It is pretrained on the Colossal Clean Crawled Corpus (C4), an open-source dataset. Raffel et al.
(2019a) found that the best pretraining objective for the encoder component was span corruption: word groups (spans) of different lengths are replaced with unique placeholders, and the model learns to decode them. Text preprocessing is necessary to implement this. For the decoder, the objective is still language modelling. Compared to models like BERT, which can only output a span of the input or a class label, T5 reframes all NLP tasks into a unified text-to-text format, where inputs and outputs always consist of text strings. As a result, the same model, loss function, and hyperparameters can be used for any NLP task, such as machine translation, document summarization, question answering, and classification tasks like sentiment analysis. T5 can even be applied to regression tasks by training it to predict the string representation of a number (rather than the number itself). Examples of potential use cases are depicted in Figure 2.20.

FIGURE 2.20: Applications of T5 model (Source: Raffel et al. (2019a)).

2.1.6.3 GPT-3

As previously stated, the neural architecture influences the type of pretraining. The original GPT architecture consists of a Transformer decoder with 12 layers (Radford et al., 2018). For decoders, it is sensible to simply pretrain them as language models and afterwards use them as generators, fine-tuning the probability of predicting the next word conditioned on the previous words.
The models are suitable for tasks similar to their training objective, including any type of dialogue and document summarization. Transformer language models are great for transfer learning. They are fine-tuned by randomly initializing a softmax classifier on top of the pretrained model and training both (with only a very small learning rate and a small number of epochs), so that the gradient propagates through the whole network.

The success of BERT in 2018 prompted a "gold rush" in NLP, in which ever greater language models were created. One that topped the headlines and used a custom supercluster for computation was the third iteration of the GPT architecture by OpenAI, known as GPT-3. Figure 2.21 reveals why GPT-3 is a famous example of current research focusing on scaling up neural language models: while the largest T5 model has 11 billion parameters, GPT-3 has 175 billion parameters. Moreover, its training data set contains around 500 billion tokens of text, whereas the average young American child hears around 6 million words per year (Hart and Risley, 1995). The results of huge language models suggest that they perform some form of learning (without gradient steps) simply from examples provided via context. The tasks are specified by the in-context examples, and the conditional probability distribution simulates performing the task to an extent.

FIGURE 2.21: Comparison of number of parameters between Transformer architectures (Source: Saifee (2020)).

2.1.7 Current Topics

2.1.7.1 Concerns regarding growing size of Language Models

As the last chapter ended with GPT-3 and emphasized the concerning trend towards ever larger language models, one could ask which other costs arise from these developments. Risks and harms, among them environmental and financial costs, have been studied by Bender et al. (2021).
They state that marginalized communities are not only less likely to benefit from LM progress, but also more likely to suffer from the environmental repercussions of increasing resource consumption. Strubell et al. (2019a) estimated that training one big Transformer model resulted in 249 t of CO2; for comparison, an average human is responsible for approximately 5 t of CO2 per year (Ritchie et al., 2020). In addition, they found that an estimated increase of 0.1 in BLEU score raised computation costs by $150,000 (for English-to-German translation). Furthermore, larger models require more data to be trained sufficiently, which has resulted in large but poorly documented training data sets. Multiple risks could be mitigated if there were a common understanding of what the models actually learn. Moreover, it has been argued that datasets consisting of web data over-represent hegemonic views and encode bias against marginalized communities. Among other factors, this is due to internet access being unevenly distributed, with an over-representation of younger users and those from developed countries. It is generally naive to educate AI systems on all aspects of the complex world and hope for the beautiful to prevail (Bender et al., 2021).

2.1.7.2 Improving Understanding of Transformer-based models

The results of transformer-based models clearly show that they are successful; it is much less clear why. The sheer size of the models makes it difficult to experiment with them, yet a limited understanding restrains researchers from devising further improvements. Therefore, multiple papers have analysed BERT's attention in search of a better understanding of large transformer models.
BERT is one of the smaller models among the popular ones, and its attention is naturally interpretable, because an attention weight indicates how significant a word is for the next representation of the current word (Clark et al., 2019). In the following, some of the findings are shared.

BERT representations are hierarchical rather than linear, and they include information about parts of speech, syntactic chunks and roles (Lin et al., 2019; Liu et al., 2019a). Furthermore, BERT has semantic knowledge. For example, it can recognize that "to tip a chef" is better than "to tip a robin" but worse than "to tip a waiter" (Ettinger, 2019). However, BERT has issues with knowledge that is assumed rather than mentioned, which especially concerns visual and perceptual properties (Da and Kasai, 2019). Additionally, BERT struggles with inferences: even though it "knows" that people walk into houses and that houses are big, it cannot infer that houses are bigger than people (Forbes et al., 2019).

While it is true that different transformer heads attend to various patterns (see Figure 2.22), most of them could, interestingly, be neglected without notable performance loss (Voita et al., 2019). Probing attention maps can be tedious, but it allows gaining knowledge of common patterns, such as an unexpected amount of attention focusing on the delimiter token [SEP].

FIGURE 2.22: Common patterns of attention heads (Source: Clark et al. (2019)).

2.1.7.3 Few-Shot Learning

For NLP tasks, a model is usually trained on a set of labelled examples and is expected to generalize to unseen data. Annotated data is not only costly but also difficult to gather for numerous languages, domains, and tasks; in practice, there is often only a very limited amount of labelled examples. Consequently, few-shot learning is a highly relevant research area (Schick and Schütze, 2020).
It describes a setting in which a model is given only a limited number of demonstrations to guide its predictions. Referring back to the concerns above, the benefits of lower computational and environmental costs should also be mentioned. Traditional fine-tuning uses a large corpus of example tasks, and the model is updated repeatedly with gradient steps so that it adapts to the task with minimal accuracy error. In contrast, few-shot applications have to complete tasks at test time with forward passes only. Their input has three main parts: the task description, the examples, and the prompt. In Figure 2.23, the task is a translation from English to French; a few examples as well as the word that should be translated are given. Zero-shot and one-shot learning refer to the model predicting with no and one learned example, respectively (Brown et al., 2020).

FIGURE 2.23: Few-shot learning (Source: Brown et al. (2020)).

Creating the few-shot examples is complicated, since the application relies on them to express the task; this is why smaller models are susceptible to examples written unfavourably. In Brown et al. (2020), it was shown that few-shot performance scales with the number of model parameters.
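The three-part structure (task description, examples, prompt) can be made concrete with a small helper that assembles a GPT-3-style few-shot context. The function name and formatting are illustrative assumptions, mirroring the layout shown in Figure 2.23:

```python
def build_few_shot_prompt(task_description, examples, query):
    """Assemble task description, worked examples, and the final prompt."""
    lines = [task_description]
    for source, target in examples:
        lines.append(f"{source} => {target}")
    # The prompt ends with the unanswered query the model should complete.
    lines.append(f"{query} =>")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("peppermint", "menthe poivrée")],
    "cheese",
)
```

The resulting string is fed to the model as-is; a zero-shot variant would simply pass an empty example list, and a one-shot variant a single pair.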
Even though GPT-3's in-context learning improved few-shot prompting capabilities, it is still sensitive to the order of training examples, the decoding strategy, and hyperparameter selection. Combined with the fact that current research uses large or held-out data sets, this leads to the suspicion that the true few-shot ability of language models is overestimated (Perez et al., 2021a). Moreover, Lialin et al. (2022) found that common transformer models could not resolve compositional questions in a zero-shot fashion and that a model's parameter count does not correlate with performance. This indicates a limitation of zero-shot prompting with the existing pre-training objectives. However, different models provided the best accuracy on different symbolic reasoning tasks, which suggests that optimization or masking strategies could be more significant than pre-training, data set size or model architecture.

2.1.8 Summary

Considering all the breakthroughs discussed in this work, Natural Language Processing has been one of the most exciting fields of machine learning in the last decade. Word embeddings allowed developers to encode words as dense vectors that capture their underlying semantic content, so that similar words are embedded close to each other in a lower-dimensional feature space. Another important challenge was solved by encoder-decoder (also called sequence-to-sequence) architectures, which made it possible to map input sequences to output sequences of different lengths. They are especially useful for complex tasks like machine translation, video captioning or question answering. A significant state-of-the-art technique is attention, which enables models to actively shift their focus, just like humans do: following one thought at a time while suppressing information irrelevant to the task.
As a consequence, attention has been shown to significantly improve performance on tasks like machine translation. By giving the decoder direct access to the source, the bottleneck is avoided; at the same time, it provides a shortcut to faraway states and thus helps with the vanishing gradient problem.

One of the most recent data modelling techniques is the transformer, which is based solely on attention and does not have to process the input data sequentially. The model is therefore better at remembering context introduced earlier in long sequences. It is currently the dominant paradigm in NLP and makes better use of GPUs, because it can perform parallel operations. Transformer architectures like BERT, T5 or GPT-3 are pre-trained on a large corpus and can be fine-tuned for specific language tasks; they can generate stories, poems, code and much more. Currently, there seems to be breaking transformer news nearly every week with no sign of slowing down. Accordingly, several trends were highlighted as relevant current topics. One of them is the increasing concern regarding the growing size of language models and the associated environmental and financial costs. Another active research direction is concerned with improving the understanding of transformer-based models to further advance them. Additionally, there are many studies on achieving respectable results on language modelling tasks after learning from only a few examples, which is known as few-shot learning.

2.2 State-of-the-art in Computer Vision

Author: Vladana Djakovic

Supervisor: Daniel Schalk

2.2.1 History

The first research on visual perception comes from neurophysiological experiments performed in the 1950s and 1960s on cats.
The researchers used cats as a model to understand how human vision works. They concluded that human vision is hierarchical: neurons detect simple features like edges, followed by more complex features like shapes, and finally even more complex visual representations. Inspired by this knowledge, computer scientists focused on recreating human neurological structures.

At around the same time, as computers became more advanced, computer scientists worked on imitating the behavior of human neurons and simulating a hypothetical neural network. In his book "The Organization of Behaviour" (1949), Donald Hebb stated that neural pathways strengthen over each successive use, especially between neurons that tend to fire at the same time, thus beginning the long journey towards quantifying the complex processes of the brain. The first Hebbian network, inspired by this neurological research, was successfully implemented at MIT in 1954 (Jaspreet, 2019).

2 Introducing the modalities

New findings led to the establishment of the field of artificial intelligence in 1956 at Dartmouth College. Scientists began to develop ideas and to research techniques that would imitate the human eye.

In 1959, early research on developing neural networks was performed at Stanford University, where models called "ADALINE" and "MADALINE" (Multiple ADAptive LINear Elements) were developed. Those models aimed to recognize binary patterns and could predict the next bit (his, 2022).

The initial optimism about Computer Vision and neural networks faded after 1969, when the book "Perceptrons" by Marvin Minsky, founder of the MIT AI Lab, argued that the single-perceptron approach to neural networks could not be translated effectively into multi-layered neural networks. The period that followed became known as the AI Winter, which lasted until around 2010, when computing technology had matured and the internet had become widely used.
In 2012, a breakthrough in Computer Vision happened at the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The team from the University of Toronto introduced a deep neural network called AlexNet (Krizhevsky et al., 2012a) that changed the fields of artificial intelligence and Computer Vision (CV). AlexNet achieved an error rate of 16.4%.

From then until today, Computer Vision has been one of the fastest developing fields. Researchers are competing to develop a model that comes closest to the human eye and helps humans in their everyday life. In this chapter the author will describe only a few recent state-of-the-art models.

2.2.2 Supervised and unsupervised learning

As part of artificial intelligence (AI) and machine learning (ML), there are two basic approaches:
• supervised learning;
• unsupervised learning.

Supervised learning (Education, 2020a) is used to train algorithms on labeled datasets to accurately classify data or predict outcomes. With labeled data, the model can measure its accuracy and learn over time. Among others, we can distinguish between two common supervised learning problems:
• classification,
• regression.

In unsupervised learning (Education, 2020b), unlabelled datasets are analyzed and clustered using machine learning algorithms. These algorithms aim to discover hidden patterns or data groupings without previous human intervention. The ability to find similarities and differences in information is mainly used for three main tasks:
• clustering,
• association,
• dimensionality reduction.

Solving problems where the dataset can be both labeled and unlabeled requires a semi-supervised approach that lies between supervised and unsupervised learning. It is useful for extracting relevant features from complex and high-volume data, e.g., medical images.
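The difference between the two regimes can be sketched with a toy NumPy example (the data, the nearest-centroid rule, and the threshold rule are purely illustrative, not from the cited sources): with labels, the model can score itself; without labels, it can only discover structure.

```python
import numpy as np

# Hypothetical toy data: six 1-D points forming two groups.
x = np.array([0.1, 0.2, 0.3, 5.1, 5.2, 5.3])

# Supervised learning: labels are given, so accuracy is measurable.
y = np.array([0, 0, 0, 1, 1, 1])
centroids = np.array([x[y == c].mean() for c in (0, 1)])      # fit from labels
pred = np.array([np.abs(xi - centroids).argmin() for xi in x])
accuracy = (pred == y).mean()                                  # possible only with labels

# Unsupervised learning: no labels; a grouping is discovered purely
# from the data (here a simple mean threshold stands in for clustering).
clusters = (x > x.mean()).astype(int)
```

On this toy data both approaches recover the same two groups; the supervised variant can additionally report its accuracy against the given labels.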
Recently, a new research topic has appeared in the machine learning community: Self-Supervised Learning. Self-supervised learning is a process in which the model trains itself to learn one part of the input from another (techslang, 2020). As a subset of unsupervised learning, it involves machines labeling, categorizing, and analyzing information independently and drawing conclusions based on connections and correlations. It can also be considered an autonomous form of supervised learning, since it does not require human input to label data. Unlike unsupervised learning, self-supervised learning does not focus on clustering or grouping (Shah, 2022). One part of self-supervised learning is contrastive learning, which is used to learn the general features of an unlabeled dataset by identifying similar and dissimilar data points. It is utilized to train a model to learn about the data without any annotations or labels (Tiu, 2021).

2.2.3 Scaling networks

Ever since the introduction of AlexNet in 2012, the problem of scaling convolutional neural networks (ConvNets) has been a topic of active research. A ConvNet can be scaled in three dimensions: depth, width, or image size. One of the first studies, in 2015, showed that network depth is crucial for image classification. The question of whether stacking more layers enables the network to learn better led to the deep residual networks called ResNets (He et al., 2015), which will be described in this work. Later on, scaling networks by their depth became the most popular way to improve their performance. The second option is to scale ConvNets by their width. Wider networks tend to be able to capture more fine-grained features and are easier to train (Zagoruyko and Komodakis, 2016). Lastly, scaling the image resolution can also improve a network's performance: with higher-resolution input images, ConvNets can capture more fine-grained patterns.
GPipe (Huang et al., 2018) is one of the most famous networks created by this technique. The question of whether it is possible to scale along all three dimensions was answered by Tan and Le (2019a) in the work presenting EfficientNet. This network was built by scaling up ConvNets in all three dimensions and will also be described here.

2.2.4 Deep residual networks

The deep residual networks, called ResNets (He et al., 2015), were presented as the answer to the question of whether stacking more layers enables a network to learn better. Until then, one obstacle to simply stacking layers was the problem of vanishing/exploding gradients. It had largely been addressed by normalized initialization and intermediate normalization layers, which enabled networks with tens of layers to start converging under stochastic gradient descent (SGD) with backpropagation.

Another obstacle was the degradation problem: as network depth increases, accuracy saturates and then degrades rapidly. Such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, which indicates that not all systems are similarly easy to optimize.

As a thought experiment, consider a shallower architecture and a deeper counterpart that adds more layers onto it. The deeper model can be constructed such that the added layers are identity mappings and the other layers are copied from the shallower model. A deeper model constructed this way should produce no higher training error than its shallower counterpart. In practice, however, this is not the case: solvers struggle to find solutions that are comparably good or better. The solution to this degradation problem proposed by He et al. (2015) is a deep residual learning framework.
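The construction argument above can be made concrete in a few lines of NumPy (a minimal sketch; the layer sizes and weights are arbitrary): a deeper model built from a shallower one plus identity layers computes exactly the same function, so its training error cannot be higher.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))            # a small batch of inputs
W = rng.normal(size=(8, 8)) * 0.1      # weight of the shallow model

def shallow(x):
    return np.maximum(x @ W, 0.0)      # a single ReLU layer

def identity(h):
    return h                           # the added layers are identity mappings

def deeper(x):
    h = shallow(x)                     # copy the shallow model ...
    for layer in [identity] * 10:      # ... then stack ten identity layers
        h = layer(h)
    return h
```

The two models produce identical outputs by construction; the degradation problem is that solvers fail to find such (or better) solutions when the added layers are ordinary non-linear layers.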
2.2.4.1 Deep Residual Learning

2.2.4.1.1 Residual Learning

The idea of residual learning is to let a few stacked layers (not necessarily the entire net) approximate a residual function F(x) := H(x) − x instead of the underlying mapping H(x) directly. Here, x denotes the input to the first of these layers, and it is assumed that inputs and outputs have the same dimensions. The original function then takes the form F(x) + x.

A counter-intuitive phenomenon about degradation motivated this reformulation. A deeper model constructed with identity mappings should not have a higher training error than its shallower counterpart. However, due to the degradation problem, solvers may have difficulties approximating identity mappings by multiple non-linear layers. With the residual learning reformulation, the solver can simply drive the weights of the non-linear layers toward zero to approach identity mappings if those are optimal. In general, identity mappings will not be optimal, but the reformulation may help to pre-condition the problem: when the optimal function is closer to an identity mapping than to a zero mapping, finding perturbations with respect to an identity mapping should be easier than learning the function from scratch.

2.2.4.1.2 Identity Mapping by Shortcuts

Residual learning is adopted for every few stacked layers, where a building block is defined as:

y = F(x, {Wi}) + x    (2.1)

Here, x and y denote the input and output vectors of the layers. Figure 2.24 visualizes the building block.

FIGURE 2.24: Building block of residual learning (He et al., 2015).

The function F(x, {Wi}) represents the residual mapping to be learned. For the example with two layers from Figure 2.24, F = W2 σ(W1 x), in which σ denotes the ReLU activation function. Biases are left out to simplify the notation.
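The two-layer building block of Equation (2.1) can be sketched in NumPy (a minimal sketch with dense instead of convolutional layers; biases are omitted as in the text, and the weights are arbitrary):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    # Equation (2.1): F(x, {W1, W2}) = W2 relu(W1 x), then the identity
    # shortcut adds x element-wise, followed by a second non-linearity.
    F = W2 @ relu(W1 @ x)
    return relu(F + x)

rng = np.random.default_rng(1)
x = rng.normal(size=8)
W1 = rng.normal(size=(8, 8)) * 0.1
W2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, W1, W2)

# If the optimal mapping were the identity, the solver would only have to
# drive the residual weights toward zero: the block then outputs relu(x).
y_identity = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
```

With zero residual weights the block reduces to the (rectified) identity, which is exactly why approaching identity mappings is easy in this parameterization.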
The operation F + x is performed by a shortcut connection and element-wise addition. Afterwards, a second non-linearity, i.e. σ(y), is applied. The shortcut connections in Equation (2.1) neither add extra parameters nor increase computational complexity, and they enable a fair comparison between plain and residual networks that have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition). The dimensions of x and F in Equation (2.1) must be equal. If they are not, a linear projection Ws can be applied by the shortcut connection to match the dimensions:

y = F(x, {Wi}) + Ws x.    (2.2)

A square matrix Ws could also be used in Equation (2.2) when the dimensions already match. However, experiments showed that the identity mapping is sufficient to solve the degradation problem, so Ws is only used for matching dimensions. Although more layers are possible, the experiments used a function F with two or three layers without fixing its exact form. If F had only one layer, Equation (2.1) would reduce to a linear layer, y = W1 x + x. The theoretical notation above refers to fully-connected layers, but convolutional layers were used as well: the function F(x, {Wi}) can represent multiple convolutional layers, and the two feature maps are added element-wise, channel by channel.

2.2.4.2 Network Architectures

Various plain/residual networks were tested to construct an efficient residual network. The networks were trained on benchmark datasets, e.g. the ImageNet dataset, that are used for comparing network architectures. Figure 2.25 shows that every residual network needs a plain baseline network, inspired by the VGG network (Simonyan and Zisserman, 2014), on which identity mapping by shortcuts is applied.

Plain Network: The philosophy of the VGG nets mainly inspires the plain baselines.
The convolution layers mostly have 3 × 3 filters and follow two rules:
• feature maps with the same output size have the same number of filters;
• when the size of a feature map is halved, the number of filters per layer is doubled to maintain the time complexity per layer.

Downsampling is performed directly by convolutional layers with a stride of 2. A global average pooling layer and a 1000-way fully-connected layer with softmax are at the end of the network. The number of weighted layers sums up to 34 (Figure 2.25, middle). Compared to VGG nets (Figure 2.25, left), this model has fewer filters and lower complexity.

Residual Network: Based on the above plain network, additional shortcut connections (Figure 2.25, right) turn the network into its residual variant. The identity shortcuts (Equation (2.1)) can be used directly when input and output have the same dimensions (solid-line shortcuts in Figure 2.25). For differing dimensions (dotted-line shortcuts in Figure 2.25), two options are considered:
• the shortcut still performs identity mapping, with extra zero entries padded to cope with the increased dimensions, so no new parameters are added;
• the projection shortcut from Equation (2.2) is used to match dimensions (via 1 × 1 convolutions).
In both cases, shortcuts that go across feature maps of two sizes are performed with a stride of 2.

2.2.5 EfficientNet

Until Tan and Le (2019b) introduced EfficientNet, it was popular to scale only one of the three dimensions – depth, width, or image size. Their empirical study shows that it is critical to balance all network dimensions, which can be achieved by simply scaling each of them with a constant ratio. Based on this observation, a simple yet effective compound scaling method was proposed, which uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients.
For example, if 2^N times more computational resources are available, the network depth can be increased by α^N, the width by β^N, and the image size by γ^N, where α, β, γ are constant coefficients determined by a small grid search on the original miniature model. Figure 2.26 illustrates the difference between this scaling method and conventional methods. Intuitively, the compound scaling method makes sense because a bigger input image requires more layers for a larger receptive field and more channels to capture the finer-grained patterns.

FIGURE 2.25: Architecture of ResNet (He et al., 2015).

Theoretically and empirically, there has been a special relationship between
network width and depth (Raghu et al., 2016). The existing MobileNets (Howard et al., 2017) and ResNets were used to demonstrate the new scaling method.

FIGURE 2.26: Model scaling (Tan and Le, 2019b).

2.2.5.1 Compound Model Scaling

2.2.5.1.1 Problem Formulation

A ConvNet layer i can be defined as a function Yi = Fi(Xi), with the operator Fi, output tensor Yi, and input tensor Xi of shape (Hi, Wi, Ci), where Hi and Wi are the spatial dimensions and Ci is the channel dimension. A ConvNet N can then be represented as a list of composed layers:

N = Fk ⊙ · · · ⊙ F2 ⊙ F1(X1) = ⊙_{j=1...k} Fj(X1)

In practice, these layers are often partitioned into multiple stages, and all layers in each stage share the same architecture. For example, ResNet has five stages, and all layers in every stage are of the same convolutional type, except for the first layer, which performs down-sampling.
Therefore, a ConvNet can be defined as:

N = ⊙_{i=1...s} Fi^{Li}(X_{⟨Hi, Wi, Ci⟩})

where Fi^{Li} denotes layer Fi repeated Li times in stage i, and ⟨Hi, Wi, Ci⟩ is the shape of the input tensor X of stage i.

In comparison to regular ConvNet design, which focuses on finding the best layer architecture Fi, model scaling centers on expanding the network length (Li), width (Ci), and/or resolution (Hi, Wi) without changing the Fi predefined in the baseline network. Although fixing Fi simplifies the design problem for new resource constraints, a large design space (Li, Hi, Wi, Ci) still remains to be explored for each layer. To further reduce the design space, all layers are restricted to be scaled uniformly with a constant ratio. In this case, the goal is to maximize the model's accuracy for any given resource constraint, which is presented as an optimization problem:

max_{d,w,r} Accuracy(N(d, w, r))
s.t. N(d, w, r) = ⊙_{i=1...s} F̂i^{d·L̂i}(X_{⟨r·Ĥi, r·Ŵi, w·Ĉi⟩})
     Memory(N) ≤ target_memory
     FLOPS(N) ≤ target_flops

where w, d, r are coefficients for scaling the network width, depth, and resolution, and (F̂i, L̂i, Ĥi, Ŵi, Ĉi) are predefined parameters of the baseline network.

2.2.5.1.2 Scaling Dimensions

The main difficulty of this optimization problem is that the optimal d, w, r depend on each other and their values change under different resource constraints. Due to this difficulty, conventional methods mostly scale ConvNets in only one of these dimensions:

Depth (d): One of the most significant networks described previously is the ResNet. As discussed, the problem of ResNets is that the accuracy gain of a very deep network diminishes.
For example, ResNet-1000 has accuracy similar to ResNet-101 even though it contains many more layers.

Width (w): Scaling network width is commonly used for small-sized models. However, wide but shallow networks tend to have difficulties grasping higher-level features.

Resolution (r): Starting from 224 × 224 in early ConvNets, modern ConvNets tend to use 299 × 299 or 331 × 331 for better accuracy. GPipe (Huang et al., 2018) recently achieved state-of-the-art ImageNet accuracy with a 480 × 480 resolution. Even higher resolutions, such as 600 × 600, are widely used in ConvNets for object detection.

The above analyses lead to the first observation:

Observation 1: Scaling up any dimension of network width, depth, or resolution improves accuracy, but the gain diminishes for bigger models.

2.2.5.1.3 Compound Scaling

Firstly, it was observed that the different scaling dimensions are not independent, because higher-resolution images also require an increase in network depth: the larger receptive fields can then help capture similar features that span more pixels in bigger images. Correspondingly, network width should also be increased when the resolution is higher, in order to capture more fine-grained patterns. This intuition suggests that the different scaling dimensions should be coordinated and balanced rather than scaled in a single dimension, as done conventionally. To confirm this, networks scaled only in width w, with unchanged depth (d = 1.0) and resolution (r = 1.0), were compared with networks that are additionally deeper (d = 2.0) and use a higher resolution (r = 2.0). The comparison showed that width scaling combined with greater depth and resolution achieves much better accuracy under the same FLOPS. These results lead to the second observation:

Observation 2: To achieve better accuracy and efficiency, it is critical to balance the dimensions of network width, depth, and resolution during ConvNet scaling.
Earlier research has tried to balance network width and depth arbitrarily, but it required tedious manual tuning. Therefore, a new compound scaling method was proposed, which uses a compound coefficient ϕ to uniformly scale network width, depth, and resolution in a principled way:

depth: d = α^ϕ
width: w = β^ϕ
resolution: r = γ^ϕ
s.t. α · β² · γ² ≈ 2,  α ≥ 1, β ≥ 1, γ ≥ 1    (2.3)

where α, β, γ are constants that can be determined by a small grid search, and ϕ is a user-specified coefficient that controls how many more resources are available for model scaling, while α, β, γ specify how to assign these extra resources to the network depth, width, and resolution, respectively. Notably, the FLOPS of a regular convolution operation are proportional to d, w², and r²: doubling the network depth doubles the FLOPS, but doubling the network width or resolution increases the FLOPS by a factor of four. Scaling a ConvNet following Equation (2.3) therefore increases the total number of FLOPS by approximately (α · β² · γ²)^ϕ. In this chapter, α · β² · γ² ≈ 2 is imposed such that for any new ϕ the total number of FLOPS increases by approximately 2^ϕ.

2.2.5.2 EfficientNet Architecture

A good baseline network is essential because model scaling does not change its layer operators F̂i. Therefore, this method was also evaluated on existing ConvNets, and a new mobile-sized baseline called EfficientNet was developed to show the effectiveness of the new scaling method. The metrics used to estimate the efficacy are accuracy and FLOPS. The baseline network that was created is named EfficientNet-B0. Afterwards, the compound scaling method is applied in two steps:

• STEP 1: Fixing ϕ = 1 and assuming twice as many resources to be available, a small grid search of α, β, γ based on Equation (2.3) showed that the best values for EfficientNet-B0 are α = 1.2, β = 1.1, γ = 1.15, under the constraint of α · β² · γ² ≈ 2.
• STEP 2: Afterwards, α, β, γ are fixed as constants, and the baseline network is scaled up with different ϕ using Equation (2.3) to construct EfficientNet-B1 to B7:

Name             Number of parameters
EfficientNet-B0  5.3M
EfficientNet-B1  7.8M
EfficientNet-B2  9.2M
EfficientNet-B3  12M
EfficientNet-B4  19M
EfficientNet-B5  30M
EfficientNet-B6  43M
EfficientNet-B7  66M

Indeed, even better performance is achievable by searching for α, β, γ directly around a large model, but the search cost becomes prohibitively expensive on larger models. This method instead searches once on the small baseline network and then scales the coefficients for all other models.

2.2.5.3 Results and comparison of the networks

To demonstrate the performance of both network families, ResNets and EfficientNets were trained and evaluated on the ImageNet 2012 classification dataset, which consists of 1000 classes. Since deeper scaling should provide better results in the case of ResNet, it was trained with increasing depth. The first meaningful results were obtained with ResNet-34, which performed 3.5% better than the plain-34 baseline in terms of top-1 accuracy. Three versions of ResNet were also compared: (A) zero-padding shortcuts (for increasing dimensions, all shortcuts are parameter-free), (B) projection shortcuts (for increasing dimensions, other shortcuts are identity), and (C) all shortcuts are projections. Each version improved both the top-1 and the top-5 accuracy. Afterwards, the depth of the network was increased, and ResNet-50, ResNet-101, and ResNet-152 were created. Each increase in depth led to higher accuracy, although for even deeper models the accuracy gain no longer justifies the additional depth. All results are shown in the following table:

Model        top-1 acc.  top-5 acc.
VGG-16       71.93       90.67
GoogLeNet    -           90.85
plain-34     71.46       89.98
ResNet-34 A  74.97       92.24
ResNet-34 B  75.48       92.54
ResNet-34 C  75.81       92.6
ResNet-50    77.15       93.29
ResNet-101   78.25       93.95
ResNet-152   78.57       94.29

In the case of the EfficientNets, the aim was to improve upon the results achieved by the previous state-of-the-art networks on the same ImageNet dataset. Among all state-of-the-art networks, the EfficientNets were compared in particular with ResNet-50 and ResNet-152. The networks derived by changing the scaling parameters, EfficientNet-B0 to EfficientNet-B7, were compared, and each network achieved better results than the previous one. It was also shown that EfficientNet-B0 outperforms ResNet-50 and that EfficientNet-B1 outperforms ResNet-152. This means that scaling along all three dimensions can provide better results than scaling along just one dimension. The drawback of this approach is its demand for computational power, which makes it less popular than the previous methods. Again, all results are shown in the following table:

Model                          top-1 acc.   top-5 acc.
EfficientNet-B0 / ResNet-50    77.1 / 76    93.3 / 93
EfficientNet-B1 / ResNet-152   79.1 / 77.8  94.4 / 93.8
EfficientNet-B2                80.1         94.9
EfficientNet-B3 / ResNeXt-101  81.6 / 80.9  95.7 / 95.6
EfficientNet-B4                82.9         96.4
EfficientNet-B5                83.6         96.7
EfficientNet-B6                84           96.8
EfficientNet-B7 / GPipe        84.3 / 84.3  97 / 97

2.2.6 Contrastive learning

In recent years, the problem of classifying unlabeled datasets has become more widespread. More and more unlabeled datasets that require human labeling are created in fields like medicine, the automotive industry, the military, etc. Since the labeling process is expensive and time-consuming, researchers assumed that it could be automated with contrastive learning frameworks. One of the first and best-known contrastive learning frameworks is SimCLR (Chen et al., 2020a).
The advantage of this framework is its simplicity, yet it achieves high accuracy on classification tasks. The main idea is to create two augmented copies of each image, which are then used to train the network by comparison. The problem with this framework is that it doubles the size of the dataset and compares across all images, which can be computationally infeasible for large datasets. Bootstrap Your Own Latent (Grill et al., 2020b) was introduced to avoid creating double-sized datasets. The idea was to bootstrap image representations in order to avoid unnecessary image comparisons. These two frameworks are described in this chapter.

Further improvements in how the two views of an image are created and compared were presented in frameworks such as Nearest-Neighbor Contrastive Learning (NNCLR) (Dwibedi et al., 2021), Open World Object Detection (ORE) (Joseph et al., 2021), Swapping Assignments between multiple Views (SwAV) (Caron et al., 2020), and many more. This field is a constant research topic, and new, improved frameworks are proposed on a regular basis to help researchers solve tasks that would otherwise require labeled datasets.

2.2.6.1 A Simple Framework for Contrastive Learning of Visual Representations

Chen et al. (2020a) intended to analyze and describe a better approach to learning visual representations without human supervision. They introduced a simple framework for contrastive learning of visual representations called SimCLR. As they claim, SimCLR outperforms previous work, is more straightforward, and does not require a memory bank.

Intending to understand what enables good contrastive representation learning, the major components of the framework were studied, with the following results:
• A contrastive prediction task requires combining multiple data augmentation operations, which results in effective representations.
Unsupervised contrastive learning benefits from stronger data augmentation.
• The quality of the learned representations can be substantially improved by introducing a learnable non-linear transformation between the representation and the contrastive loss.
• Representation learning with a contrastive cross-entropy loss can be improved by normalizing the embeddings and adjusting the temperature parameter appropriately.
• Unlike its supervised counterpart, contrastive learning benefits from larger batch sizes and longer training. Like supervised learning, contrastive learning also benefits from deeper and wider networks.

2.2.6.2 The Contrastive Learning Framework

In SimCLR, a contrastive loss is used to learn a representation by maximizing the agreement between differently augmented views of the same data example. The framework contains four major components, which are shown in Figure 2.27:
1. A stochastic data augmentation module
2. A neural network base encoder
3. A small neural network projection head
4. A contrastive loss function

FIGURE 2.27: A simple framework for contrastive learning of visual representations (Chen et al., 2020a).

2.2.6.2.1 Stochastic data augmentation module

First, a minibatch of N examples is sampled randomly, and the contrastive prediction task is defined on pairs of augmented examples, resulting in 2N data points. A memory bank was not used to train the model; instead, the training batch size varies from 256 to 8192. From any given data example, two correlated views of the same example are generated randomly, denoted x̃i and x̃j; this is known as a positive pair. Negative pairs are all other 2(N − 1) pairs. To obtain the views, data augmentation techniques are applied. Data augmentation is widely embraced in supervised and unsupervised representation learning.
However, it had not previously been used to define the contrastive prediction task, which was mainly determined by changing the architecture. It was shown that choosing suitable data augmentation techniques can reduce the complexity of previous contrastive learning frameworks. Out of the many possible data augmentation operations, the focus was on the most common ones:
• Spatial geometric transformations: cropping and resizing (with horizontal flipping), rotation, and cutout;
• Appearance transformations: color distortion (including color dropping), brightness, contrast, saturation, Gaussian blur, and Sobel filtering.

FIGURE 2.28: Augmentation techniques (Chen et al., 2020a): original; crop and resize; crop, resize (and flip); color distortion (drop); color distortion (jitter); rotation (90°, 180°, 270°); cutout; Gaussian noise; Gaussian blur; Sobel filtering.

Due to the varying image sizes in the ImageNet dataset, all images were always randomly cropped and resized to the same resolution. The other targeted data augmentation transformations were then applied to only one branch, leaving the other view as the original, i.e. t(x̃i) = x̃i. Applying just an individual transformation is insufficient for the model to learn good representations. The model's performance improves after composing augmentations, although the contrastive prediction task becomes more complex. The composition of augmentations that stood out was random cropping combined with random color distortion.

It was also observed that stronger color augmentation significantly improves the linear evaluation of unsupervised learned models. Stronger color augmentations do not enhance the performance of supervised learning models trained with the same augmentations.
Based on these experiments, unsupervised contrastive learning benefits from stronger color data augmentation than supervised learning.

2.2.6.2.2 Neural network base encoder

A neural network base encoder $f(\cdot)$ extracts representation vectors from the augmented data examples. The framework does not restrict the choice of network architecture; for simplicity, the commonly used ResNet was picked, giving $h_i = f(\tilde{x}_i) = \mathrm{ResNet}(\tilde{x}_i)$, where $h_i \in \mathbb{R}^d$ is the output after the average pooling layer. Although increasing depth and width improves performance, ResNet-50 was chosen. Furthermore, as the model size increases, the gap between supervised and unsupervised learning shrinks, suggesting that bigger models benefit more from unsupervised learning.

2.2.6.2.3 Small neural network projection head

A small neural network projection head $g(\cdot)$ maps the representation to the space where the contrastive loss is applied. The importance of including a projection head $g(h)$ was evaluated with three different architectures for the head:

1. identity mapping,
2. linear projection,
3. the default non-linear projection with one additional hidden layer and a ReLU activation function.

The results showed that a non-linear projection head is better than a linear projection and much better than no projection at all; it improves the representation quality of the layer before it. An MLP with one hidden layer was used to obtain $z_i = g(h_i) = W^{(2)}\sigma(W^{(1)}h_i)$, where $\sigma$ is a ReLU non-linearity.

The contrastive loss is defined on $z_i$ instead of on $h_i$ so that the information loss induced by the contrastive loss does not affect $h_i$. In particular, $z = g(h)$ is trained to be invariant to data transformations; as a result, $g$ can remove information that may be useful for a downstream task, such as object color or orientation.
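The projection head $g(h) = W^{(2)}\sigma(W^{(1)}h)$ can be sketched as follows (the tiny dimensions and random weights are ours purely for illustration; in SimCLR the 2048-dimensional ResNet-50 output is projected to a 128-dimensional $z$, and the weights are learned):

```python
import random

random.seed(0)

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, w):
    """Matrix-vector product; w is an (out x in) weight matrix, no bias term."""
    return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]

d, proj = 4, 3                 # toy sizes: h in R^4, z in R^3 (hypothetical)
W1 = [[random.gauss(0, 0.5) for _ in range(d)] for _ in range(d)]
W2 = [[random.gauss(0, 0.5) for _ in range(d)] for _ in range(proj)]

def g(h):
    """Projection head z = W2 . relu(W1 . h), the MLP described in the text."""
    return linear(relu(linear(h, W1)), W2)

h = [1.0, -0.5, 0.25, 2.0]     # stand-in for the encoder output f(x)
z = g(h)                       # representation fed into the contrastive loss
```

Only `z` sees the contrastive objective; `h`, taken before the head, is what is kept for downstream tasks.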
By pushing this invariance into the non-linear transformation $g(\cdot)$, the representation $h$ can maintain and retain more information.

2.2.6.2.4 Contrastive loss function

Given a set $\{\tilde{x}_k\}$ including a positive pair of examples $\tilde{x}_i$ and $\tilde{x}_j$, the contrastive prediction task aims to identify $\tilde{x}_j$ in $\{\tilde{x}_k\}_{k \neq i}$ for a given $\tilde{x}_i$. For a positive pair, the loss function is defined as

$$\ell_{i,j} = -\log \frac{\exp\left(\mathrm{sim}(z_i, z_j)/\tau\right)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp\left(\mathrm{sim}(z_i, z_k)/\tau\right)},$$

where $\mathbb{1}_{[k \neq i]} \in \{0, 1\}$ is an indicator function, $\tau$ denotes a temperature parameter, and $\mathrm{sim}(u, v) = u^{\top}v / (\lVert u \rVert \lVert v \rVert)$ is the dot product between $\ell_2$-normalized $u$ and $v$, i.e. the cosine similarity.

The final loss is computed across all positive pairs, both $(i, j)$ and $(j, i)$, in a mini-batch. It was named NT-Xent, the normalized temperature-scaled cross-entropy loss.

The NT-Xent loss was compared against other commonly used contrastive loss functions, such as the logistic loss and the margin loss. Gradient analysis shows that $\ell_2$ normalization, cosine similarity, and the temperature together effectively weight different examples, and that a suitable temperature can help the model learn from hard negatives. The advantage of NT-Xent is that it weights the negatives by their relative hardness. Without normalization and proper temperature scaling, performance is significantly worse: the contrastive task accuracy is higher, but the resulting representation is worse under linear evaluation.

2.2.6.3 Bootstrap Your Own Latent

The fundamental idea of contrastive learning is to create pairs of images on which the framework is trained. Creating negative pairs relies on large batch sizes, memory banks, or customized mining strategies, which can be challenging for larger datasets. Grill et al. (2020b) wanted to create a new approach that achieves better performance than other contrastive methods without using negative pairs. The solution they introduced is a method called Bootstrap Your Own Latent (BYOL). The idea was to bootstrap representations of images.
As a result, BYOL is more robust to the choice of image augmentations. BYOL uses two neural networks, called the online and the target network, which interact and learn from each other. Given an augmented view of an image, BYOL trains its online network to predict the target network's representation of another augmented view of the same image. This approach achieved state-of-the-art results when trained on the ImageNet dataset under the linear evaluation protocol. Additionally, compared to SimCLR, a strong contrastive baseline, BYOL suffers a much smaller performance drop when only random crops are used to augment images.

2.2.6.3.1 Description of the method

BYOL aims to learn a representation $y_\theta$. To achieve this, it uses two neural networks: the online and the target network. The online network is determined by a set of weights $\theta$ and consists of:

• an encoder $f_\theta$,
• a projector $g_\theta$,
• a predictor $q_\theta$.

FIGURE 2.29: Bootstrap Your Own Latent (Grill et al., 2020b).

The target network has the same architecture as the online network but uses a different set of weights $\xi$. It provides the regression targets to train the online network, and its parameters $\xi$ are an exponential moving average of the online parameters $\theta$. Precisely, given a target decay rate $\tau \in [0, 1]$, after each training step the update

$$\xi \leftarrow \tau\xi + (1 - \tau)\theta$$

is performed. First, an image $x$ is sampled uniformly from the image set $D$, and two distributions of image augmentations $T$ and $T'$ are given. BYOL applies the two image augmentations $t \sim T$ and $t' \sim T'$, creating two augmented views $v \triangleq t(x)$ and $v' \triangleq t'(x)$. The first augmented view $v$ is fed to the online network, resulting in the output $y_\theta \triangleq f_\theta(v)$ and then the projection $z_\theta \triangleq g_\theta(y_\theta)$. Similarly, from the second augmented view $v'$ the target network outputs $y'_\xi \triangleq f_\xi(v')$ and the target projection $z'_\xi \triangleq g_\xi(y'_\xi)$.
The online network then outputs a prediction $q_\theta(z_\theta)$ of $z'_\xi$, and both $q_\theta(z_\theta)$ and $z'_\xi$ are $\ell_2$-normalized:

$$\overline{q_\theta}(z_\theta) \triangleq \frac{q_\theta(z_\theta)}{\lVert q_\theta(z_\theta) \rVert_2} \qquad \text{and} \qquad \bar{z}'_\xi \triangleq \frac{z'_\xi}{\lVert z'_\xi \rVert_2}.$$

The predictor is only applied in the online pipeline, making the architecture asymmetric between the online and target pipelines. Lastly, the following mean squared error between the normalized prediction and the normalized target projection is defined:

$$L_{\theta,\xi} \triangleq \left\lVert \overline{q_\theta}(z_\theta) - \bar{z}'_\xi \right\rVert_2^2 = 2 - 2 \cdot \frac{\left\langle q_\theta(z_\theta),\, z'_\xi \right\rangle}{\lVert q_\theta(z_\theta) \rVert_2 \cdot \lVert z'_\xi \rVert_2}.$$

The loss is symmetrized by also feeding $v'$ to the online network and $v$ to the target network, which yields $\tilde{L}_{\theta,\xi}$. At each training step, a stochastic optimization step is applied to minimize $L^{\mathrm{BYOL}}_{\theta,\xi} = L_{\theta,\xi} + \tilde{L}_{\theta,\xi}$ with respect to $\theta$ only, but not $\xi$. BYOL's dynamics are summarized as

$$\theta \leftarrow \mathrm{optimizer}\left(\theta, \nabla_\theta L^{\mathrm{BYOL}}_{\theta,\xi}, \eta\right),$$

where $\eta$ is a learning rate. At the end of training, only the encoder $f_\theta$ is kept.

2.2.6.4 Comparison of contrastive learning frameworks

Of all the frameworks, SimCLR is the most popular due to its simplicity. ResNet-50 in three different hidden-layer widths (width multipliers of 1×, 2×, and 4×) was used and trained for 1000 epochs each. The accuracy of these frameworks on the ImageNet dataset improves as the width of ResNet-50 increases. Under linear evaluation, SimCLR with ResNet-50 reaches a top-1 accuracy of 69.3 and a top-5 accuracy of 89.0, while with ResNet-50 (4×) it reaches 76.5 top-1 and 93.2 top-5. These results are comparable with supervised methods. The BYOL framework was built to improve on the results of SimCLR: with the baseline ResNet-50 it reaches 74.3 top-1 and 91.6 top-5 accuracy, and with ResNet-50 (4×) an increase to 78.6 top-1 and 94.2 top-5 is observed.
More information about performance can be found in the following table:

| Model  | Architecture    | Param (M) | Top-1 acc. | Top-5 acc. |
|--------|-----------------|-----------|------------|------------|
| SimCLR | ResNet-50       | 24        | 69.3       | 89.0       |
| SimCLR | ResNet-50 (2×)  | 94        | 74.2       | 93.0       |
| SimCLR | ResNet-50 (4×)  | 375       | 76.5       | 93.2       |
| BYOL   | ResNet-50       | 24        | 74.3       | 91.6       |
| BYOL   | ResNet-50 (2×)  | 94        | 77.4       | 93.6       |
| BYOL   | ResNet-50 (4×)  | 375       | 78.6       | 94.2       |
| BYOL   | ResNet-200 (2×) | 250       | 79.6       | 94.8       |

2.2.7 Transformers in Computer Vision

Since the first appearance of the Transformer architecture in 2017 (Vaswani et al., 2017), it has become an irreplaceable part of natural language processing (NLP) models. The main advantage of Transformers is that they can be trained on a large text corpus and then fine-tuned on a smaller task-specific dataset. This enabled the training of models of unprecedented size, with more than 100B parameters.

However, computer vision still relied on convolutional architectures. With datasets constantly growing and the diversity of fields computer vision tasks can be applied to, researchers wanted to bring the Transformer architecture to the CV field. Some works aimed at combining CNN-like architectures with self-attention (Wang and Li, 2018); others attempted to replace convolutions entirely, e.g. Ramachandran et al. (2019). The problem was that, due to their specialized attention patterns, these approaches had not yet been scaled effectively on modern hardware accelerators, so in large-scale image recognition classic ResNet-like architectures were still state-of-the-art.

In 2021 the Google Research Brain Team published the paper "An Image is Worth 16×16 Words", in which they introduced a new Transformer-based architecture for CV called the Vision Transformer (ViT) (Dosovitskiy et al., 2020c). Based on the success of Transformer scaling in NLP, they aimed to apply a standard Transformer directly to images, with as few changes as possible to the existing architecture.
The image is split into patches, and the linear embeddings of these patches are provided as input to the Transformer. These patches play the same role as tokens (e.g. words) in NLP. The model is trained for image classification in a supervised fashion.

2.2.7.1 Vision Transformers

The Brain Team wanted to create a simple but universally scalable architecture that follows the original Transformer architecture as closely as possible.

2.2.7.1.1 Method

Compared to NLP, where the Transformer input is a 1-dimensional sequence of token embeddings, images are 2-dimensional objects. Images therefore first need to be represented differently to imitate the original architecture as closely as possible. For that reason, the image $x \in \mathbb{R}^{H \times W \times C}$ is reshaped into a sequence of flattened 2-dimensional patches $x_p \in \mathbb{R}^{N \times (P^2 \cdot C)}$, where $(H, W)$ is the resolution of the original image, $C$ is the number of channels, $(P, P)$ is the resolution of each image patch, and $N = HW/P^2$ is the resulting number of patches, which is also the Transformer's effective input sequence length. The Transformer uses a constant latent vector of size $D$ through all of its layers. The first step is to flatten the patches, usually of size 16×16, and map them to $D$ dimensions with a trainable linear projection to create the patch embeddings.

FIGURE 2.30: Vision Transformer (Dosovitskiy et al., 2020c).

$$z_0 = \left[x_{\mathrm{class}};\, x_p^1 E;\, x_p^2 E;\, \cdots;\, x_p^N E\right] + E_{\mathrm{pos}}, \qquad E \in \mathbb{R}^{(P^2 \cdot C) \times D},\; E_{\mathrm{pos}} \in \mathbb{R}^{(N+1) \times D}$$

To this sequence of patch embeddings, a learnable [class] token, as in BERT, is prepended. This token, $z_0^0 = x_{\mathrm{class}}$, tells the model to classify the image and increases the sequence length by one. The state of this token at the output of the Transformer encoder, $z_L^0$, after layer normalization is applied, serves as the image representation $y$:

$$y = \mathrm{LN}\left(z_L^0\right)$$

Furthermore, it is the only token to which the classification head is attached during pre-training and fine-tuning.
The classification head is an MLP with one hidden layer during pre-training and a single linear layer at fine-tuning time. Standard learnable 1-dimensional position embeddings are added to the patch embeddings, and the resulting sequence serves as input to the encoder. The standard Transformer encoder consists of alternating layers of multi-headed self-attention (MSA) and MLP blocks; after each block, a residual connection is applied:

$$z'_\ell = \mathrm{MSA}\left(\mathrm{LN}(z_{\ell-1})\right) + z_{\ell-1}, \qquad \ell = 1 \ldots L$$

$$z_\ell = \mathrm{MLP}\left(\mathrm{LN}(z'_\ell)\right) + z'_\ell, \qquad \ell = 1 \ldots L$$

The Vision Transformer has a significantly lower image-specific inductive bias than CNNs: in ViT, only the MLP layers are local and translationally equivariant, while the self-attention layers are global. The 2-dimensional neighborhood structure is used only sparingly: the image is cut into patches at the beginning, and the position embeddings are resized as needed at fine-tuning time. Alternatively, the input sequence can consist of a CNN's feature map, on which the patch embedding projection is applied. Vision Transformers are pre-trained on large datasets and fine-tuned to (smaller) downstream tasks. For fine-tuning, the projection head is removed and a zero-initialized $D \times K$ feed-forward layer is attached, with $K$ being the number of downstream classes. It is also beneficial to use a higher resolution than in pre-training. ViT can handle arbitrary sequence lengths, but then the pre-trained position embeddings may no longer be meaningful.
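The patch-embedding step $z_0 = [x_{\mathrm{class}};\, x_p^1 E;\, \cdots;\, x_p^N E] + E_{\mathrm{pos}}$ can be sketched in pure Python as follows (the toy sizes are our choice; a real ViT uses e.g. 224×224 inputs with $P = 16$ and $D = 768$, and $E$, $x_{\mathrm{class}}$ and $E_{\mathrm{pos}}$ are learned parameters rather than fixed random values):

```python
import random

random.seed(0)

H = W = 8; C = 1; P = 4; D = 6      # toy sizes, for illustration only
N = (H * W) // (P * P)              # number of patches = sequence length

# toy image x in R^{H x W x C}
img = [[[random.random() for _ in range(C)] for _ in range(W)] for _ in range(H)]

def patches(img):
    """Cut the image into non-overlapping PxP patches, each flattened to P^2*C values."""
    out = []
    for pi in range(0, H, P):
        for pj in range(0, W, P):
            out.append([img[pi + a][pj + b][c]
                        for a in range(P) for b in range(P) for c in range(C)])
    return out

E = [[random.gauss(0, 0.1) for _ in range(P * P * C)] for _ in range(D)]  # linear projection
x_class = [0.0] * D                                                       # [class] token
E_pos = [[random.gauss(0, 0.1) for _ in range(D)] for _ in range(N + 1)]  # position embeddings

def embed(img):
    """z0 = [x_class; x_p^1 E; ...; x_p^N E] + E_pos — the encoder's input sequence."""
    tokens = [x_class] + [[sum(e * v for e, v in zip(row, p)) for row in E]
                          for p in patches(img)]
    return [[t + pe for t, pe in zip(tok, pos)] for tok, pos in zip(tokens, E_pos)]

z0 = embed(img)   # sequence of N + 1 tokens of dimension D
```

The resulting sequence of $N + 1$ $D$-dimensional tokens is what the alternating MSA and MLP blocks operate on.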
It is necessary to point out that resolution adjustment and patch extraction are the only points at which an inductive bias about the 2-dimensional structure of the images is manually injected into the Vision Transformer.

2.2.7.1.2 Experiments

Similarly to BERT, multiple versions of the model at various scales were created: the Base ("B"), Large ("L"), and Huge ("H") versions of ViT, with 12, 24, and 32 layers and 86M, 307M, and 632M parameters, respectively.

To explore model scalability, the previously mentioned ImageNet dataset was used. In addition, ViT was compared against a slightly modified ResNet called "ResNet (BiT)", in which the batch normalization layers are replaced with group normalization and standardized convolutions are used. Another network it was compared to was Noisy Student (Xie et al., 2019), a large EfficientNet. Experiments showed that ViT-Huge with a 14×14 input patch size outperformed both CNN-based networks with an accuracy of 88.5%, whereas ResNet (BiT) achieved 87.54% and Noisy Student 88.4%. It is worth mentioning that ViT-Large with a 16×16 input patch size reached 87.76% accuracy on the same dataset. Another point worth noting is that ViT outperforms CNN-based architectures on all larger datasets, yet performs slightly worse than CNN networks on smaller datasets.

2.2.8 Conclusion

In this chapter, the authors presented some of the current state-of-the-art approaches in Computer Vision. Nowadays, with technology advancing every day, creating networks that imitate the human brain becomes ever more challenging. Still, the networks presented in this chapter are highly accurate, and creating a network which can outperform them is difficult. Furthermore, it is noticeable that the applications of CV dictate the development of the networks and frameworks which help humans with their everyday tasks.
2.3 Resources and Benchmarks for NLP, CV and Multimodal Tasks

Author: Christopher Marquardt

Supervisor: Christian Heumann

When we see athletes perform in their sport, we only see the result of the hard work they put in prior to the event. Most of the time they casually talk about their off-season, but everybody knows that the results are made in the off-season. The same goes for the models we will see in the later chapters: we are just interested in the results, but why and how does a model arrive at them? It has to learn some key fundamentals of the modality to achieve these results. But how do we get models to perform this way, or even better? It's possible to build better architectures and/or to use more and new data. New data is easy to get, but it creates a new problem: it has to be carefully labeled by humans, which can be very expensive given the amount of data. Models which learn from labeled data use the supervised learning strategy, and for this reason that strategy is a bottleneck for future progress.

But the need to label the data isn't the only problem. Let's visit the athlete analogy again: imagine a professional football player has to participate in a professional ski race. He will not be able to compete with the others, because they have trained to do only ski races. Here we see the other problem: models which use supervised learning have been shown to perform very well on the task they are trained for, but models which learn on carefully labeled data perform poorly on other tasks, and it's not possible to label everything in the world.

So the goal is to create more generalist models which can perform well on different tasks without the need for huge labeled datasets. Humans are able to perform well on different tasks in a short amount of time.
Humans, for example, only need a small number of hours to learn how to drive a car, even without supervision, whereas fully automated driving AI needs thousands of hours of data to drive a car. Why do humans learn so much faster than machines? Humans don't rely on labeled data, because most of the time they learn by observation. In this way humans acquire a basic knowledge of how the world works, which is also called common sense, and this enables us to learn so much faster than machines. Meta AI (Yann and Ishan, 2021) believes that self-supervised learning is one of the most promising ways to generate background knowledge and some sort of common sense in AI systems. By self-supervised learning one means a supervised learning algorithm that doesn't need an external supervisor. Self-supervised pre-training differs between the modalities, which means there is no single approach that works for all modalities. The following chapter will inspect pre-training resources and their use on the one hand, and on the other hand the benchmarks which are used for natural language processing (NLP), computer vision (CV) and, as the combination of both, vision-language pre-trained models (VL-PTMs).

2.3.1 Datasets

After pointing out that pre-training is very important, one might ask: what do the datasets look like, and how do the different modalities pre-train? We will first inspect the former and focus afterwards on the use of the resources. As one might expect, NLP models pre-train on text, CV models pre-train on images, and VL-PTMs pre-train on text-image pairs, which can be seen as a combination of NLP and CV. CV models have mostly used labeled data, like a picture of a dog with the corresponding single label "dog". Multimodal datasets can contain several sentences of text which correspond to a given image.
Even if the datasets are completely different, the procedure to get the data is mostly the same for all of them, because the data is crawled from the internet. This can lead to a problem: datasets collected this way can be noisy. One approach for VL-PTMs, for example, is to use Common Crawl and extract an image together with its alt-text. The alt-text is an alternate text for an image, shown if the image cannot be displayed or read aloud for visually impaired people. This seems like a reasonable approach, but the alt-text is often not very informative about what's in the image.

Another difference between the modalities is the size of the pre-training data. It's easy to see that text is by far the easiest to crawl from the internet, which results in massive, high-quality text datasets. The datasets for CV are some orders of magnitude smaller. Since VL-PTMs are pretty new compared to the other modalities, their datasets are still relatively small, but growing fast. A small downer is that some of the datasets are not publicly available. The big companies like to keep their models and the datasets they use private, which hinders reproducibility, but there are also truly open competitors like LAION and EleutherAI in the field. The next sections present some of the most used pre-training datasets.

2.3.1.1 Natural Language Processing Datasets

2.3.1.1.1 Common Crawl

As already mentioned, extracting text from the internet is rather easy. More precisely, there is a non-profit organization, called Common Crawl, which does exactly this. They provide copies of the internet to researchers, companies, and individuals at no cost for the purpose of research and analysis. The Common Crawl corpus contains petabytes of data collected since 2008. Every month, Common Crawl releases a snapshot of the web obtained by randomly exploring and sampling URLs. It contains raw web page data, extracted metadata, and text extractions.
The advantages of Common Crawl come along with disadvantages: the text is from diverse domains but of varying quality. To handle the raw nature of the dataset, one often has to use a well-designed extraction and filtering pipeline in order to use it appropriately (Gao et al., 2020). GPT-3, for example, uses a filtered version of Common Crawl which consists of 410 billion tokens (Brown et al., 2020). So data for NLP is freely available, but one needs well-designed extraction and filtering to really make use of it.

2.3.1.1.2 The Pile

Recent work (Rosset, 2020) showed that diversity in training datasets improves general cross-domain knowledge and downstream generalization capability of language models. The Pile (Gao et al., 2020) was introduced to address exactly these results. The Pile contains 22 sub-datasets, including established NLP datasets but also several newly introduced ones. The 22 sub-datasets, which can be roughly grouped into five categories, add up to around 825 GB of data. The following treemap shows the composition of the dataset.

[Treemap: composition of the Pile by category (Academic, Internet, Prose, Dialogue, Misc), with sub-datasets such as Pile-CC, PubMed Central, ArXiv, GitHub, FreeLaw, StackExchange, USPTO, OpenWebText2, Wikipedia, and DM Mathematics.]

While only 13% of the world's population speaks English, the vast majority of NLP research is done on English. Gao et al. (2020) followed this trend, but did not explicitly filter out other languages when collecting the data. This leads to the fact that roughly 95% of the Pile is English. EuroParl (Koehn, 2005), a multilingual parallel corpus introduced for machine translation, is also included in the Pile. To train GPT-2, OpenAI collected data from WebText.
WebText is an internet dataset created by scraping URLs extracted from Reddit submissions with a minimum score, as a proxy for quality, but sadly it was never released to the public. Independent researchers reproduced the pipeline and released the resulting dataset, called the OpenWebTextCorpus (OWT) (Gokaslan and Cohen, 2019). Eleuther created an enhanced version of the original OWT corpus called OpenWebText2. It covers all Reddit submissions from 2005 up until April 2020 and includes content in multiple languages, document metadata, multiple dataset versions, and open-source replication code.

A dataset of mathematical problems (DeepMind Mathematics) was also explicitly included to improve the mathematical ability of language models trained on the Pile. An ArXiv dataset was included in the hope that it would be a source of high-quality text and math knowledge and benefit potential downstream applications in these research areas, and also because arXiv papers are written in LaTeX; training a language model to generate papers written in LaTeX could be a huge benefit to the research community. Since Common Crawl, due to its raw nature, needs further processing before it can really be used, the Pile includes Pile-CC, a Common Crawl-based dataset which can be used directly and yields higher-quality output than directly using the WET files. These were only some of the 22 included datasets; a more detailed description of the sub-datasets and the reasons for their inclusion can be found in the corresponding paper (Gao et al., 2020).

2.3.1.1.3 Multilingual Datasets

Another pre-cleaned version of Common Crawl is CC-100 (Wenzek et al., 2019).
They present a pipeline to create curated monolingual corpora in more than 100 languages. A filter which scores documents based on their distance to Wikipedia is used, and this improves the quality of the resulting dataset. However, its English portion is much smaller than the Pile. Still, a multilingual dataset might help a low-resource language acquire extra knowledge from other languages. Perhaps the most multilingual corpus publicly available, containing 30k sentences in over 900 languages, is the Bible corpus (Mayer and Cysouw, 2014). Up to now, all of the datasets mentioned were freely available and almost directly usable; the next one is not publicly available.

To provide mT5 (Xue et al., 2020), a multilingual pre-trained text-to-text Transformer, with a suitable pre-training dataset, Google Research designed a dataset covering more than 100 languages: mC4 (Xue et al., 2020). Since some languages are relatively scarce on the internet, they used all of the 71 monthly web scrapes released so far by Common Crawl. It contains 6.6 billion pages and 6.3 trillion tokens. A smaller version of mC4 is also used by Google Research: the C4 (Colossal Clean Crawled Corpus) dataset was explicitly designed to be English-only. C4 is a collection of about 750 GB of English-language text sourced from the public Common Crawl web scrape.

Most of the datasets used in NLP are derived entirely from Common Crawl, and Rosset (2020) came to the conclusion that the current best practice in training large-scale language models involves using both large web scrapes and more targeted, higher-quality datasets, which the Pile directly addresses.

2.3.1.1.4 BooksCorpus

The last dataset for NLP is the BooksCorpus dataset (Zhu et al., 2015). The BooksCorpus uses books from yet unpublished authors from the web. Only books with more than 20k words were included, to filter out shorter, noisier stories.
This results in around 11k books from 16 different genres, so more than 74 million sentences can be used for pre-training. BooksCorpus contains a sample of books from a distributor of indie e-books. Sadly, a datasheet about the BooksCorpus was not released with the corresponding paper; there was just a paragraph about the content and the extraction inside the paper (Zhu et al., 2015). Bandy and Vincent (2021) addressed exactly this shortcoming and provided a retrospective datasheet about the BooksCorpus. Some of their major concerns were copyright violations, duplicate books, skewed genre representation, potentially skewed religious representation, and problematic (18+) content. Little harm can be expected if an informed adult reads books with these issues, but a language model trained on these books may, for example, reproduce well-documented gender discrimination.

Since BooksCorpus is no longer distributed, one has to visit the distributor of the indie e-books and collect one's own version of the BooksCorpus. It is thus one of the datasets users have to gather themselves, in addition to some of the datasets of the Pile.

2.3.1.2 Computer Vision Datasets

2.3.1.2.1 ImageNet

The next inspected modality is CV. Almost every state-of-the-art CV model uses a classifier pre-trained on an ImageNet-based dataset. ImageNet uses the hierarchical structure of WordNet (Fellbaum, 2010). At the release of ImageNet-1k, the number of classes was unheard of at that time: datasets like CIFAR-10 (Krizhevsky et al., 2009) and CIFAR-100 (Krizhevsky et al., 2009) had 10 or 100 classes, but ImageNet-1k had 1000 different classes, and this was not the only major improvement. The resolution was also increased from 32×32 to 256×256. In all, there are roughly 1.2 million training images, 50,000 validation images, and 150,000 testing images. The ImageNet-1k dataset is a subset of the full ImageNet dataset (Deng et al., 2009).
The full ImageNet dataset is also called ImageNet-21k. It consists of more than 14 million images divided into almost 22k classes, which is why some papers describe it as ImageNet-22k. The two datasets differ not only in the number of classes but also in the type of labels: the labels of ImageNet-21k are not mutually exclusive. Because of this, and because ImageNet-21k lacks an official train-validation split, pre-training with ImageNet-1k is far more popular. The raw ImageNet-21k dataset is around 1.3 terabytes (TB). It is also nice that the ImageNet datasets are openly available; the next dataset stands in contrast to this, because it is not freely available.

2.3.1.2.2 Joint-Foto-Tree (JFT) & Entity-Foto-Tree (EFT)

The Joint-Foto-Tree (JFT) 300M is one of the follow-up versions of the JFT dataset (Hinton et al., 2015b). As the name suggests, it consists of 300 million images, and on average each image has 1.26 labels, so the whole dataset has around 375 million labels. These labels can be divided into 18,291 classes, which form a rich hierarchy with a maximum depth of 12 and a maximum number of children per parent node of 2876 (Sun et al., 2017). For example, there are labels for 1165 types of animals and 5720 types of vehicles. Approximately 20% of the labels in this dataset are noisy (Sun et al., 2017), because the labels are generated automatically. The label distribution is also heavily long-tailed, which means that some of the classes have fewer than 100 images. There is also an extended version of the JFT dataset, called Entity-Foto-Tree (EFT) because the class labels are physical entities organized in a tree-like hierarchy, which contains 20 diversified verticals and consists of 100k classes.
It’s even rarely used in practice by Google because of +the intolerable large model size and the slow training speed (Gao et al., 2017). +Honestly, nobody really knows what is inside these datasets, except Google +and they never published a datasheet about it. +These datasets are often used for image classification, but localization-sensitive +tasks like object detection and semantic segmentation are also of interest in +CV. +2.3.1.2.3 +Objects365 +Objects365 (Shao et al., 2019) is a large-scale object detection and semantic +segmentation freely available dataset. It contains 365 object categories with over +600K training images. More than 10 million, high-quality bounding boxes are +manually labeled through a three-step, carefully designed annotation pipeline. +The ImageNet datasets also contain bounding boxes, but compared Object365 +dataset the number of boxes per image is about 15.8 vs 1.1 (Deng et al., 2009). +They collected images mainly from Flicker to make the image sources more +diverse. All the images conform to licensing for research purposes. The dataset +also builds on a tree-like hierarchy with eleven super-categories (human and +related accessories, living room, clothes, kitchen, instrument, transportation, +bathroom, electronics, food (vegetables), office supplies, and animal). Further + +60 +2 Introducing the modalities +they proposed 442 categories which widely exists in daily lives. As some of +the object categories are rarely found, they first annotate all 442 categories +in the first 100K images and then they selected the most frequent 365 object +categories as their target objects. +To enable compatibility with the existing object detection benchmarks, the +365 categories include the categories defined in Microsoft Common Objects in +Context (COCO) (Lin et al., 2014b), which is described in the next paragraph. 
2.3.1.2.4 Microsoft Common Objects in Context (COCO)

Microsoft employed a novel pipeline for gathering data, with extensive use of Amazon Mechanical Turk. Their goal was to create a non-iconic image collection. Iconic-object images have a single large object centered in the image. Such images provide high-quality object instances, but they lack important contextual information and non-canonical viewpoints (Lin et al., 2014b). Recent work showed that non-iconic images generalize better (Torralba and Efros, 2011). They mostly used Flickr images, because these tend to be less iconic. This resulted in a collection of 328,000 images. After gathering the images, they used workers on Amazon's Mechanical Turk for the annotation. The workers were given a list with 91 categories and 11 super-categories. First, a worker had to decide whether a super-category (e.g. animal) was present or not. If it was present, the worker had to classify the instance into the appropriate subordinate category (dog, cat, mouse). This greatly reduced the time needed to classify the various categories; it still took the workers about 20k hours to complete. After this, the workers also had to do instance spotting and instance segmentation. For the instance segmentation, the workers had to complete a training task until their segmentations adequately matched the ground truth; only 1 in 3 workers passed this training stage. At the end, five written captions were added to each image in the dataset, which is called Microsoft Common Objects in Context.
In total, more than 70,000 worker hours were utilized to collect a vast amount of annotated object instances, gathered to drive the advancement of segmentation algorithms and other tasks. COCO is a dataset which can be used in CV and also in multi-modal models, because of its image-text pairs.
2.3.1.3 Multi Modal Datasets

The Pile is an attempt by Eleuther to mimic the dataset used for GPT-3, and LAION wants to achieve something similar. OpenAI collected more than 250 million image-text pairs from the internet to train CLIP and DALL-E. This dataset includes parts of COCO, Conceptual Captions and a filtered subset of the Yahoo Flickr Creative Commons 100 Million Dataset (YFCC100M). YFCC100M contains a total of 100 million media objects. The collection provides a comprehensive snapshot of how photos and videos were taken, described, and shared over the years, from the inception of Flickr in 2004 until early 2014. This dataset was also never published, even though the underlying data is freely available. To address this shortcoming, LAION created LAION-400M.

2.3.1.3.1 LAION 400M & 5B

LAION-400M (Schuhmann et al., 2021a) consists of 400 million image-text pairs. The creators used Common Crawl and parsed out all HTML IMG tags containing an alt-text attribute. As already mentioned, these alt-texts can sometimes be very uninformative, so they used CLIP to compute embeddings of the image and the alt-text and dropped all samples with a similarity below 0.3. The dataset also contains the CLIP embeddings and kNN indices. Schuhmann et al. (2021a) describe the procedure to create the dataset in an open manner. They also ran DALLE-pytorch, an open-source replication of DALL-E, on a subset of LAION-400M and produced samples of sufficient quality. This opens the road for large-scale training and research of language-vision models, which was previously not possible for everyone. It is still difficult because of the large amount of data, but at least it is theoretically possible for everyone. LAION-400M is also known as crawling@home (C@H), because the project started as a small group that used only their own computers at the beginning, reminiscent of the fight of David versus Goliath.
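A minimal numpy sketch of this similarity filter; the embeddings below are random stand-ins, while in the real pipeline they come from CLIP's image and text encoders:

```python
import numpy as np

def filter_pairs(img_emb, txt_emb, threshold=0.3):
    """Keep only image-text pairs whose normalized embeddings have a
    cosine similarity >= threshold (LAION-400M used 0.3)."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sims = np.sum(img * txt, axis=1)       # per-pair cosine similarity
    return np.where(sims >= threshold)[0]  # indices of surviving pairs

# Toy stand-ins for CLIP embeddings (512-dimensional in the real model)
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 512))
keep = filter_pairs(a, a + 0.05 * rng.normal(size=(4, 512)))
print(len(keep))  # → 4, nearly identical vectors easily pass the 0.3 cutoff
```

Filtering with a model's own similarity score is cheap and scalable, but, as discussed in the bias section below, it also bakes that model's preferences into the resulting dataset.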
At the end of March 2022, the LAION team released a dataset 14× bigger than LAION-400M, called LAION-5B. It consists of 5.85 billion CLIP-filtered image-text pairs. A paper about the dataset is currently in progress, but the dataset is already available to download if you have enough space: it takes about 240 TB at 384×384 resolution or 80 TB at 224×224. Due to the nature of the extraction, 2.3 billion samples contain English language and 2.2 billion samples come from 100+ other languages; the team also provides a search demo. At the moment, LAION-5B is the biggest openly accessible image-text dataset.
The amount of image-text pairs in LAION-400M or LAION-5B seems incomparable to COCO, but one has to keep in mind that the text in the COCO dataset was gathered in a high-quality manner. The COCO dataset is still used because of this high quality, even though it was created in 2014.

2.3.1.3.2 Localized Narratives

Localized Narratives chose a new form of connecting vision and language in multi-modal image annotations (Pont-Tuset et al., 2020). The authors asked annotators to describe an image with their voice while simultaneously hovering their mouse over the region they were describing. This synchronized approach enabled them to determine the image location of every single word in the description. Since automatic speech recognition still produces imperfect transcriptions, an additional manual transcription of the voice stream is needed to obtain the written words. This manual transcription step might be skipped in the future if automatic speech recognition improves, which would make the approach even more efficient. They collected Localized Narratives for the earlier introduced COCO (Lin et al., 2014b) dataset, ADE20K (Zhou et al., 2017), the Flickr30k & 32k datasets (Young et al., 2014) and 671k images of Open Images (Kuznetsova et al., 2020).
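The synchronization idea can be sketched as follows: given word timings from the transcription and the recorded mouse trace, each word collects the trace points recorded while it was spoken. Names and data here are illustrative, not the authors' actual pipeline:

```python
def ground_words(words, trace):
    """words: list of (word, t_start, t_end) from the transcription;
    trace: list of (t, x, y) mouse samples.
    Returns word -> list of (x, y) positions hovered while speaking it."""
    grounding = {}
    for word, t0, t1 in words:
        grounding[word] = [(x, y) for t, x, y in trace if t0 <= t < t1]
    return grounding

words = [("a", 0.0, 0.2), ("brown", 0.2, 0.6), ("dog", 0.6, 1.0)]
trace = [(0.1, 5, 5), (0.3, 40, 42), (0.5, 41, 43), (0.8, 90, 10)]
g = ground_words(words, trace)
print(g["brown"])  # → [(40, 42), (41, 43)]
```

The mouse positions collected per word are what turns a plain caption into a grounded one.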
Localized Narratives can be used in many different multi-modal tasks, since it incorporates four synchronized modalities (image, text, speech, grounding). Another difference is that the captions are longer than in most previous datasets (Krishna et al., 2017; Kuznetsova et al., 2020; Lin et al., 2014b), and models like Imagen (Saharia et al., 2022a) and Parti (Yu et al., 2022a) work well with long prompts. Besides that, the 849k images with Localized Narratives are publicly available (Website, 2020).

2.3.1.3.3 WuDaoMM

English is the most spoken language in the world, but Mandarin Chinese is in second place and its use is steadily increasing. We will therefore also present a large-scale Chinese multi-modal dataset, WuDaoMM (Yuan et al., 2022). In total it consists of 650 million image-text pair samples, but the authors released a base version containing about 5 million image-text pairs. WuDaoMM-base includes 19 categories and 5 million high-quality images, which can be used for most Chinese vision-language model pre-training. They designed two acquisition strategies according to the correlation type between text and image. Their collection included data with weak relations, meaning that the texts do not have to precisely describe their corresponding images to be retained, and data with strong relations. These strong-relation image-text pairs were found on professional websites, where most images are reviewed for relevance, content, and sensitivity when they are uploaded. The WuDaoMM-base dataset is a balanced sub-dataset composed of each major category of the strong-correlated dataset, which is sufficient to support the research and use of current mainstream pre-training models.

2.3.1.3.4 Wikipedia Image Text (WIT)

The Wikipedia Image Text (WIT) dataset ends this chapter. Most datasets are English-only, and this lack of language coverage impedes research in the multilingual multimodal space.
To address these challenges and to advance research on multilingual, multimodal learning, Srinivasan et al. (2021) presented WIT. They used Wikipedia articles and Wikimedia image links to extract multiple different texts associated with an image. Additionally, rigorous filtering was used to retain only high-quality image-text associations. The result is a dataset that contains more than 37.6 million image-text sets and spans 11.5 million unique images. Due to the multimodal coverage of Wikipedia, WIT provides unique multilingual coverage, with more than 12K examples in each of 108 languages and more than 100K image-text pairs in 53 languages.
Another thing worth pointing out is that the authors could leverage Wikipedia's editing, verification and correction mechanisms to ensure a high quality bar. This curation is a huge difference compared to the web crawls used to create other existing datasets. They even verified the curated quality of the WIT dataset via an extensive human-annotation process, with an overwhelming majority of 98.5% judging the randomly sampled image-text associations favorably.
These datasets are just some of the more widely used ones. Some of them are publicly available while others are not. Normally each dataset comes with a paper which describes the creation procedure in far more detail than this chapter. This chapter gives only a small insight into the different datasets and aims to raise interest in the corresponding papers. Papers with Code collects research papers with code implementations by the authors or the community. There one can find information about the state-of-the-art model for every modality and downstream task, as well as the available datasets for all possible tasks.
Datasets are crucial for research and exploration as, rather obviously, data is required for performing experiments, analyzing designs, and building applications. A particular problem is that the collected data is often not made publicly available. While this sometimes is out of necessity due to the proprietary or sensitive nature of the data, this is certainly not always the case. A public dataset with clearly marked licenses that do not overly impose restrictions on how the data is used, such as those offered by CC, would therefore be suitable for use by both academia and industry. One has to keep in mind that an effective dataset is a catalyst and accelerator for technological development (Yuan et al., 2022). This may be one reason why the big companies do not share their datasets, but there are also others. Another reason might be the bias which is contained in the datasets.

2.3.1.4 Bias In Datasets

Internet access itself is not evenly distributed, which results in narrow Internet participation: internet data overrepresents younger users and those from developed countries. User-generated content sites present themselves as open to anyone, but factors such as moderation practices make them less welcoming to specific sub-populations. Take the training data of GPT-2 as an example. It was sourced by scraping outbound links from Reddit, and Pew Internet Research's 2016 survey (Barthel and Mitchell, 2016) reveals that 67% of Reddit users in the United States are men, and 64% are between ages 18 and 29. Sheng et al. (2019) shed light on some of the resulting gender bias. They used OpenAI's GPT-2 to generate text given different prompts. Some of the examples can be seen in the next table.
Prompt | Generated text
"The man worked as" | "a car salesman at the local Wal-Mart"
"The woman worked as" | "a prostitute under the name of Hariya"
"The Black man worked as" | "a pimp for 15 years"
"The White man worked as" | "a police officer, a judge, a prosecutor, a prosecutor, and the president of the United States"
"The gay person was known for" | "his love of dancing, but he also did drugs"
"The straight person was known for" | "his ability to find his own voice and to speak clearly"

Datasets obviously encode the social bias that surrounds us, and models trained on that data may expose the bias in their decisions. The predictions of a model are based on what it has learned from its data, so we have to be aware of this bias.
Dhamala et al. (2021) introduced the Bias in Open-Ended Language Generation Dataset (BOLD), a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology. They also proposed new automated metrics for toxicity, psycholinguistic norms, and text gender polarity to measure social biases in open-ended text generation from multiple angles. An examination of text generated from three popular language models (BERT, GPT-2, CTRL) revealed that the majority of these models exhibit a large social bias across all domains. It was also shown that GPT-2 conforms more to social biases than BERT. GPT-3 was trained on a filtered version of the Common Crawl dataset, developed by training a classifier to pick out those documents that are most similar to the ones used in GPT-2's training data, so very likely the same holds for GPT-3. These biases do not only persist in NLP datasets; they can also be found in other modalities.
There exists the so-called WordNet effect, which leads to some bias in CV datasets.
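As a flavor of what such automated bias metrics look like, here is a toy text-gender-polarity score in the spirit of BOLD's metrics; the word lists and formula are illustrative stand-ins, not BOLD's actual implementation:

```python
MALE = {"he", "him", "his", "man", "men"}
FEMALE = {"she", "her", "hers", "woman", "women"}

def gender_polarity(text):
    """Return (#male tokens - #female tokens) / #gendered tokens, in [-1, 1];
    0.0 when no gendered token occurs."""
    tokens = text.lower().split()
    m = sum(t in MALE for t in tokens)
    f = sum(t in FEMALE for t in tokens)
    return 0.0 if m + f == 0 else (m - f) / (m + f)

print(gender_polarity("He said his work speaks for itself"))  # → 1.0
print(gender_polarity("She and he shared the award"))         # → 0.0
```

Averaging such a score over many generations per prompt group is what turns a crude word count into a benchmark signal.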
This effect emerges because WordNet includes words that can be perceived as pejorative or offensive. N*****r and wh**e are just two examples which can be found in WordNet. Prabhu and Birhane (2020) investigated problematic practices and the consequences of large-scale vision datasets. Broad issues such as the question of consent and justice, as well as specific concerns such as the inclusion of verifiably pornographic images in datasets, were revealed. Two days after the publication of the paper (Prabhu and Birhane, 2020), TinyImages was withdrawn because of their findings. Torralba, Fergus and Freeman, the creators of TinyImages, argued that the offensive images were a consequence of the automated data collection procedure that relied on nouns from WordNet. MS-Celeb (Guo et al., 2016) was also retracted for the same reasons. It would be very surprising if these kinds of problems were not present in other databases for this kind of research, especially as we get to extremely large dataset sizes. Despite the retractions, datasets like TinyImages and MS-Celeb remain widely available through file-sharing websites.
Even if LAION-400M opened the road for large-scale training and research of language-vision models for everyone, its curation pipeline involves CLIP. One might argue that this approach will potentially generate CLIP-like models, and it is known that CLIP inherits various biases (Radford et al., 2021a). Birhane et al. (2021) found that the LAION-400M dataset contains troublesome and explicit image-text pairs of rape, pornography, malign stereotypes, racist and ethnic slurs, and other extremely problematic content, and one can be fairly sure that the same holds for LAION-5B, as it uses the same curation pipeline. This shows even more that large institutions should open up their datasets to both internal and external audits in a thoughtful manner.
We have to fully understand the risks of using such datasets, and this is not achievable with the current approach. Despite all these concerns, the next chapters will demonstrate how the different datasets are used, but it is important to keep these concerns in mind.

2.3.2 Pre-Training Tasks

Yann LeCun and Ishan Misra suggest in their blogpost that supervised pre-training is on its way out for the reasons already mentioned at the beginning, and that the future belongs to self-supervised pre-training (Yann and Ishan, 2021). Meta AI wants to create background knowledge in models that can approximate the common sense of humans. This suggestion is all the more reasonable because recent work (Mineault, 2021) showed that a self-supervised or unsupervised pre-training approach is biologically more plausible than supervised methods. This is why neuroscientists are taking an interest in unsupervised and self-supervised deep neural networks in order to explain how the brain works (Zhuang et al., 2021).
Self-supervised learning (SSL) is also called predictive learning, which follows from the nature of the process. The general technique of self-supervised learning is to predict any unobserved or hidden part (or property) of the input from any observed or unhidden part of the input (Yann and Ishan, 2021). Models like BERT try to predict between known intervals, and GPT-3 predicts the future given the past. A part of a sentence is hidden and the model tries to predict the hidden words from the remaining ones. Predicting missing parts of the input is one of the more standard tasks for SSL pre-training. To complete a sentence with missing parts, the system has to learn how to represent the meaning of words, the syntactic role of words, and the meaning of entire texts. These missing-parts tasks are easy to implement in NLP compared to CV.
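A toy sketch of such a masking task, BERT-style; the masking rate matches BERT's 15%, while the token and seed choices are illustrative:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=1):
    """BERT-style corruption sketch: hide a fraction of tokens; the model's
    pre-training task is to recover them from the visible context."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append(mask_token)
            targets[i] = tok  # position -> original token to predict
        else:
            masked.append(tok)
    return masked, targets

sent = "the quick brown fox jumps over the lazy dog".split()
masked, targets = mask_tokens(sent)
print(masked)
print(targets)  # the prediction targets at the masked positions
```

The model never sees `targets` directly; it only receives `masked` and is trained to reproduce the hidden tokens.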
In NLP the solution space is finite, because one estimates a distribution over a previously specified dictionary. In CV the solution space is infinite, because it is not possible to explicitly represent all possible frames and associate a prediction score with them (Yann and Ishan, 2021).
Meta AI proposed a unified view of self-supervised methods. They say an energy-based model (EBM) is a system that, given two inputs x and y, tells us how incompatible they are with each other (Yann and Ishan, 2021). If the energy is high, x and y are deemed incompatible; if it is low, they are deemed compatible.
The idea sounds simple, but it is difficult to achieve. A usual approach is to take an image and create an augmented version of it. For this pair the energy has to be low, because both versions come from the same picture. For example, one can grayscale the image; by doing so we tell the model that color does not matter. Bromley et al. (1993) proposed this kind of approach under the name Siamese networks. The difficulty is to make sure that the networks produce high energy, i.e. different embedding vectors, when x and y are different images. The problem is that such Siamese networks tend to collapse. When a collapse occurs, the energy is not higher for non-matching x and y than it is for matching x and y; the networks ignore their input and produce the same embeddings.
This led to so-called contrastive methods. The method used to train NLP systems by masking or substituting some input words belongs to this category. Contrastive methods are based on the simple idea of constructing pairs of x and y that are not compatible, and adjusting the parameters of the model so that the corresponding output energy is large. The problem is that they are very inefficient to train. For contrastive methods one needs so-called hard negatives.
These are images that are similar to image x but different enough to still produce a high energy. Finding them is a major issue of contrastive methods: self-supervised representation learning relies on negative samples to prevent collapsing to trivial solutions.
So the best idea is to get rid of the hard negatives, and BYOL (Grill et al., 2020a) is one approach that achieved exactly this. BYOL creates two slightly different variants of an image by applying two random augmentations, like a random crop, a horizontal flip, a color jitter or a blur. A big difference to the Siamese network is that BYOL uses different parameters in the two encoders, so-called online and target parameters. The target parameters are never learned; they are copied over from the online parameters using an exponential moving average, so they are a kind of lagged version of the online parameters. BYOL manages to learn a representation of an image without using negative pairs, just by predicting previous versions of its outputs.
Still, the authors note that BYOL remains dependent on existing sets of augmentations, that these augmentations require human intuition, and that automating the search for them would be an important next step, if it is possible at all (Grill et al., 2020a).
He et al. (2022) recently came very close to the MLM pre-training used in BERT with their masked autoencoder (MAE). They leveraged transformers and autoencoders for self-supervised pre-training. An autoencoder consists of an encoder that maps the observed signal to a latent representation, and a decoder that reconstructs the original signal from the latent representation. The MAE is a form of denoising autoencoding, exactly like MLM. Their approach is to divide an image into, for example, 16 × 16 patches, then remove 75% of the patches and feed only the remaining 25% into their huge encoder.
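A small numpy sketch of this random patch masking; the shapes are illustrative, not MAE's exact configuration:

```python
import numpy as np

def mae_mask(patches, mask_ratio=0.75, seed=0):
    """Drop a random 75% of patches; only the rest is fed to the encoder.
    patches: (num_patches, patch_dim) array."""
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    # keep a random 25% subset, in original patch order
    keep = np.sort(rng.permutation(n)[: int(n * (1 - mask_ratio))])
    return patches[keep], keep

# A 224x224 image cut into 16x16 patches gives 14*14 = 196 patches
patches = np.zeros((196, 16 * 16 * 3))
visible, keep_idx = mae_mask(patches)
print(visible.shape)  # → (49, 768)
```

Because the encoder only ever processes the visible 25%, pre-training becomes much cheaper than encoding the full image.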
Important to add is that position embeddings are also used in the encoder. The input of the decoder is again the full set of tokens, consisting of the unmasked and the masked tokens. The MAE thus has to reconstruct the input by predicting the pixel values for each masked patch. Autoencoding pursues a conceptually different direction compared to BYOL or DINO, which are based on augmentation.
The reconstructions still look somewhat blurry, but the learned representations are already very rich. It is also interesting to note that BERT removes only 15% of the data, whereas MAE removes 75%.
Dual encoder models like CLIP (Radford et al., 2021a) and ALIGN (Jia et al., 2021b) demonstrated in the past that contrastive objectives on noisy image-text pairs can lead to strong image and text representations. One thing to mention is that contrastive objectives are easier to implement in vision-language models (VLMs) than in CV. This comes from the fact that VLMs use image-text pairs. As a dual encoder, CLIP encodes the image and the text, and by construction the text which corresponds to the image (or vice versa) should achieve the highest similarity, while all other texts get a lower similarity. So one already has hard negatives available and does not have to search for them.
Through SSL the models already learn a good representation of the given input, but fine-tuning leads to even better results. This chapter will just provide a rough sketch, since fine-tuning heavily depends on the model and the downstream task; fine-tuning will also be shown in later chapters. Fine-tuning means updating the weights of a pre-trained model by training on a supervised (labeled) dataset for a specific downstream task. A huge amount of data is needed to fine-tune a model. This is also the main disadvantage of fine-tuning, because one needs a new large dataset for every possible downstream task.
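Returning to the dual-encoder idea above, here is a numpy sketch of a CLIP-style in-batch contrastive objective: within a batch of N matched pairs, the N-1 non-matching texts act as ready-made negatives for each image. This is a simplified symmetric InfoNCE, not CLIP's exact implementation:

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric cross-entropy over a batch of matched image/text embeddings."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix

    def xent(l):
        # cross-entropy with the diagonal (the matching pair) as the target
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    return (xent(logits) + xent(logits.T)) / 2

# Perfectly aligned pairs give a near-zero loss; shuffled ones a large one
e = np.eye(4, 8)
print(round(clip_contrastive_loss(e, e), 4))  # → 0.0
```

Minimizing this loss pushes the diagonal (matching) similarities up and all off-diagonal (mismatched) similarities down at the same time.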
After pre-training and fine-tuning the models, there is a need to compare them, because one always seeks to find the best model among all competitors. This need led to the creation of datasets for test purposes, which are often called benchmarks.

2.3.3 Benchmarks

As models get better over time, because of bigger datasets or better pre-training tasks, it is important to create and use new benchmarks. Interestingly, there are also benchmarks which rely only on zero-shot performance. Zero-shot learning (ZSL) is a problem setting in machine learning where, during test time, a model observes samples from classes not observed during training. It has to complete a task without having received any training examples and therefore has to generalize to a novel category of samples.
But the most common approach is to use a part of the dataset which was not used to train the model. To make this possible, the pre-training datasets are divided into training, test and validation sets. It is clear that the models must not be tested on the training data.
This splitting results in so-called held-out data, but Rajpurkar et al. (2018) showed that these held-out datasets are often not comprehensive and contain the same biases as the training data. Recht et al. (2019) also proposed that these held-out datasets may overestimate real-world performance.
Something to consider as well is that pre-training on large internet datasets may lead to unintentional overlap between pre-training data and downstream tasks. Because of this, several studies (Radford et al., 2021a; Yu et al., 2022a; Brown et al., 2020) conducted de-duplication analyses. CLIP's analysis resulted in a median overlap of 2.2% and an average overlap of 3.2%, but the authors also observed that the overall accuracy is rarely shifted by more than 0.1% (Radford et al., 2021a). Mahajan et al. (2018) and Kolesnikov et al.
(2019) came to similar results, but it is still something to keep in mind.
Some of the already mentioned datasets, like COCO and the ImageNet versions, are often used for CV or VLMs. Almost every state-of-the-art CV model uses a classifier pre-trained on an ImageNet-based dataset and is benchmarked on the validation sets of that dataset. Another small downer is that the models of the big companies are usually trained on different datasets, though at least they are compared on the same benchmarks. So the comparison seems a bit odd: maybe the better performance of a model simply comes from a different pre-training dataset.

2.3.3.1 Natural Language Processing Benchmarks

2.3.3.1.1 (Super)GLUE

The goal of NLP is the development of general and robust natural language understanding systems. Through SSL, models gain a good "understanding" of language in general. To benchmark this "understanding", the General Language Understanding Evaluation (GLUE) was created. It is a collection of nine different task datasets, which can be divided into single-sentence tasks, similarity and paraphrase tasks, and inference tasks.
The single-sentence tasks consist of the Corpus of Linguistic Acceptability (CoLA) and the Stanford Sentiment Treebank (SST-2). Each example in CoLA is a sequence of words annotated with whether it is a grammatical English sentence. SST-2 uses sentences from movie reviews and human annotations of their sentiment; the task is to predict the sentiment of a given sentence.
For the similarity and paraphrase tasks, the Microsoft Research Paraphrase Corpus (MRPC), Quora Question Pairs (QQP) and the Semantic Textual Similarity Benchmark (STS-B) are used. MRPC is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in each pair are semantically equivalent.
The model has to predict whether sentence B is a paraphrase of sentence A. The STS-B sub-task dataset consists of a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5, and the task is to predict these similarity scores. QQP is a collection of question pairs from the community question-answering website Quora; here the model has to predict whether a pair of questions is semantically equivalent.
Lastly, the Multi-Genre Natural Language Inference Corpus (MNLI), the Stanford Question Answering Dataset based QNLI, the Recognizing Textual Entailment (RTE) dataset and the Winograd Schema Challenge based WNLI are used for the inference tasks. MNLI is a crowdsourced collection of sentence pairs with textual entailment annotations; the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). QNLI is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph contains the answer to the corresponding question; the task is to determine whether the context sentence contains the answer to the question. RTE comes from a series of annual textual entailment challenges. WNLI is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The following table gives a short summary of all GLUE tasks.

Dataset | Description | Data example | Metric
CoLA | Is the sentence grammatical or ungrammatical? | "This building is than that one." = Ungrammatical | Matthews
SST-2 | Is the movie review positive, negative, or neutral? | "The movie is funny, smart, visually inventive, and most of all, alive." = .93056 (Very Positive) | Accuracy
MRPC | Is sentence B a paraphrase of sentence A? | B) "The island reported another 35 probable cases yesterday, taking its total to 418." = A Paraphrase | Accuracy / F1
STS-B | How similar are sentences A and B? | B) "A herd of elephants are walking along a trail." = 4.6 (Very Similar) | Pearson / Spearman
QQP | Are the two questions similar? | B) "How can Internet speed be increased by hacking through DNS?" = Not Similar | Accuracy / F1
MNLI-mm | Does sentence A entail or contradict sentence B? | A) "Tourist Information offices can be very helpful." B) "Tourist Information offices are never of any help." = Contradiction | Accuracy
QNLI | Does sentence B contain the answer to the question in sentence A? | A) "What is essential for the mating of the elements that create radio waves?" B) "... to the electromagnetic field." = Answerable | Accuracy
RTE | Does sentence A entail sentence B? | A) "In 2003, Yunus brought the microcredit revolution to the streets of Bangladesh to support more than 50,000 beggars, whom the Grameen Bank respectfully calls Struggling Members." B) "Yunus supported more than 50,000 Struggling Members." = Entailed | Accuracy
WNLI | Sentence B replaces sentence A's ambiguous pronoun with one of the nouns; is this the correct noun? | A) "Lily spoke to Donna, breaking her concentration." B) "Lily spoke to Donna, breaking Lily's concentration." = Incorrect Referent | Accuracy

A nice topping is that GLUE also provides a leaderboard with a human benchmark, so the models can compete against each other and against humans. After a short period of time the models started to surpass the human benchmark, which led to the creation of SuperGLUE.
SuperGLUE also consists of a public leaderboard, built around eight language understanding tasks drawing on existing data, accompanied by a single-number performance metric and an analysis toolkit.
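CoLA in the table above is scored with the Matthews correlation rather than plain accuracy, since the acceptable and unacceptable classes are imbalanced; a small self-contained version for binary 0/1 labels:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation for binary labels; 1 = perfect, 0 = chance level."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

print(matthews_corrcoef([1, 1, 0, 0], [1, 1, 0, 0]))  # → 1.0
print(matthews_corrcoef([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.0
```

Unlike accuracy, this score stays at 0 for a classifier that always predicts the majority class, which is exactly why it is the metric of choice for imbalanced acceptability data.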
SuperGLUE surpassed GLUE through more challenging tasks, more diverse task formats, comprehensive human baselines, improved code support and refined usage rules. The following figure gives a short summary of the SuperGLUE tasks.

FIGURE 2.31: Example instances for the SuperGLUE tasks (BoolQ, CB, COPA, MultiRC, ReCoRD, RTE, WiC and WSC); taken from https://mccormickml.com

The GLUE and SuperGLUE tasks are more or less reduced to a classification problem. One might argue whether this is really general language understanding, but we will see other benchmarks which try to evaluate that in a different way. However, it is also of interest to check whether the models understand what they are reading. The act of understanding what you are reading is called reading comprehension (RC). RC requires both understanding of natural language and knowledge about the world.

2.3.3.1.2 Stanford Question Answering Dataset (SQuAD) (1.0 & 2.0)

Rajpurkar et al.
(2016) introduced the Stanford Question Answering Dataset (SQuAD), a large reading comprehension dataset on Wikipedia articles with human-annotated question-answer pairs. SQuAD contains 107,785 question-answer pairs on 536 articles and does not provide a list of answer choices for each question. The model must select the answer from all possible spans in the passage and thus needs to cope with a fairly large number of candidates. The problem is that it is guaranteed that the answer exists in the context document.

To address this weakness, Rajpurkar et al. (2018) presented SQuAD 2.0, the latest version of SQuAD. SQuAD 2.0 combines existing SQuAD data with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.

The contribution of Rajpurkar et al. (2018) to NLP is not only that they provide a deeper glimpse into the workings of QA systems; they also facilitated the creation of more non-English datasets. Korean, Russian, Italian, Spanish, French and Arabic versions of SQuAD exist around the world. XQuAD, MLQA and TyDi QA are multilingual question-answering datasets. XQuAD is a subset of SQuAD translated into 10 different languages by professional translators. These kinds of resources are crucial in ensuring that the societal benefits of NLP can also be felt by speakers of lower-resourced languages.

2.3.3.1.3 Beyond the Imitation Game Benchmark (BIG-bench)

The benchmarks mentioned so far are rather old compared to the Beyond the Imitation Game Benchmark (BIG-bench) (Srivastava et al., 2022). It is a collaborative benchmark intended to probe large language models and extrapolate their future capabilities. BIG-bench already contains more than 200 tasks. Its authors claim that current language-modeling benchmarks are insufficient to satisfy our need to understand the behavior of language models and to predict their future behavior. They mainly provide three reasons for that. One of them is the short useful lifespan of benchmarks.
When human-equivalent performance is reached on these benchmarks, they are often discontinued or replaced. One might call this a “challenge-solve-and-replace” evaluation dynamic. To prevent this, they encourage new task submissions: literally everybody can submit a task to BIG-bench, which is why they call it a living benchmark. The review of the tasks is based on ten criteria, including, for example, “Justification”: one has to give background motivating why this is an important capability of large language models to quantify. With the inclusion of small tasks they want to improve the diversity of topics covered and enable domain experts to contribute tasks without the difficulties of distributed human labeling.

Another reason for the insufficiency is that the other benchmarks are narrowly targeted, and their targets are often ones that language models are already known to perform. So it is not possible to identify new and unexpected capabilities that language models may develop with increased scale, or to characterize the breadth of current capabilities.

Finally, many current benchmarks use data collected through human labeling that is not performed by experts or by the task authors. The BIG-bench tasks are primarily intended to evaluate pre-trained models, without task-specific fine-tuning. By focusing on such tasks in the zero- and few-shot evaluation setting, it becomes possible to provide meaningful scores even for tasks with a very small number of examples.

The “everybody can submit” strategy also leads to the inclusion of a variety of tasks covering non-English languages. So far, large language models like GPT-3 and PaLM perform poorly on BIG-bench relative to expert humans, which is maybe a good sign for the future. But superhuman performance on the SuperGLUE benchmark was achieved less than 18 months after it was produced.
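The zero- and few-shot evaluation setting mentioned above boils down to formatting a handful of solved examples (or none at all) into a single prompt for a pre-trained model. A minimal sketch follows; the task, the examples and the Q/A formatting are made-up illustrations, not the actual BIG-bench task format:

```python
def build_prompt(train_examples, query, k=2):
    """Few-shot prompt: k in-context examples followed by the unanswered query.

    k = 0 gives the zero-shot variant, where the model sees only the query.
    """
    lines = [f"Q: {q}\nA: {a}" for q, a in train_examples[:k]]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

# toy task: two solved examples, one open query
examples = [("2 + 2 = ?", "4"), ("3 + 5 = ?", "8")]
print(build_prompt(examples, "1 + 6 = ?", k=2))
print(build_prompt(examples, "1 + 6 = ?", k=0))  # zero-shot: only the query
```

The model's completion after the final "A:" is then scored against the reference answer, which is why even tasks with very few examples can yield meaningful scores.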
2.3.3.1.4 WMT

The Workshop on Machine Translation (WMT) provides the family of datasets most popular for benchmarking machine translation systems. WMT is the main event for machine translation and machine translation research; the conference is held annually and includes competitions on different aspects of machine translation, known as shared tasks. Typically, the task organisers provide datasets and instructions, teams submit the output of their models, and the submissions are ranked by human evaluation.

Most of the models are evaluated on bi-lingual translation, like English-to-German, but there are also tri-lingual tasks, like using English to improve Russian-to-Chinese machine translation. One of the most popular NLP metrics is the BLEU score, which is also used in the WMT tasks. It is based on the idea that the closer the predicted sentence is to the human-generated target sentence, the better it is. BLEU scores are between 0 and 1, but a score of 0.6 or 0.7 is considered the best one can achieve.

Bowman and Dahl (2021), however, claim that the evaluation for many natural language understanding (NLU) tasks is broken: unreliable and biased systems score so highly on standard benchmarks that there is little room for researchers who develop better systems to demonstrate their improvements. They provide four criteria to handle this:

1. Good performance on the benchmark should imply robust in-domain performance on the task.
2. Benchmark examples should be accurately and unambiguously annotated.
3. Benchmarks should offer adequate statistical power.
4. Benchmarks should reveal plausibly harmful social biases in systems, and should not incentivize the creation of biased systems.

Building new benchmarks that improve upon these four axes is likely to be quite difficult.
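Returning to the BLEU score described above: its core idea, modified n-gram precision combined with a brevity penalty, can be sketched as follows. This is a simplified, smoothed, sentence-level illustration against a single reference, not the exact corpus-level definition:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU of a candidate against one reference."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = Counter(ngrams(candidate, n)), Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())           # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smoothed to avoid log(0)
    # brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat".split()
print(bleu(ref, ref))                          # identical sentences score 1.0
print(bleu("the cat lay on a rug".split(), ref))  # few shared n-grams, score near 0
```

The geometric mean over n-gram orders is why a single missing 4-gram match drags the score down sharply, matching the observation that even good translations rarely exceed 0.6 or 0.7.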
2.3.3.1.5 CheckList

Inspired by principles of behavioral testing in software engineering, Ribeiro et al. (2020) introduced CheckList, a model-agnostic and task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideas, as well as a software tool to generate a large and diverse number of test cases quickly. To break down potential capability failures into specific behaviors, CheckList introduces three different test types. A Minimum Functionality test (MFT), inspired by unit tests in software engineering, is a collection of simple examples to check a behavior within a capability. An Invariance test (INV) applies label-preserving perturbations to inputs and expects the model predictions to remain the same. A Directional Expectation test (DIR) is similar, except that the label is expected to change in a certain way. Tests created with CheckList can be applied to any model, making it easy to incorporate them into current benchmarks or evaluation pipelines, and CheckList is open source. The goal was to create a benchmark which goes beyond mere accuracy on held-out data.

2.3.3.2 Computer Vision Benchmarks

CV models try to answer visual tasks, i.e. tasks which can be solved only from visual input. Often a visual task can be posed as a classification problem, which is called image classification, but there are also numerous other applications for CV. This chapter will focus on image classification, semantic segmentation and object detection with their usual benchmark datasets.

2.3.3.2.1 ImageNet Versions

It is not only common to pre-train models on ImageNet datasets, it is also common to benchmark models on them. There are many different variants of ImageNet.
There is ImageNet-R, a version with non-natural images such as art, cartoons and sketches, or ImageNet-A, which is a more challenging version because it uses adversarial images (Goodfellow et al., 2014d), or ImageNet-V2 (Recht et al., 2019). The last was created to check whether there is over-fitting on the classic pre-training ImageNet dataset. Its creators followed the creation process of the original dataset and tested to what extent current classification models generalize to new data. Recht et al. (2019) found accuracy drops for all models and suggested that these drops are not caused by adaptivity, but by the models' inability to generalize to slightly “harder” images than those found in the original test sets.

The goal of image classification is to classify an image by assigning it a label. Typically, image classification refers to images in which only one object appears. To assess the performance one mainly uses Top-1 accuracy, where the model's highest-probability answer must exactly match the expected answer, or Top-5 accuracy, where any of the five highest-probability answers must match the expected answer. Beyer et al. (2020) tried to answer the question “Are we done with ImageNet?” in their paper. Many images of the ImageNet dataset contain a clear view of a single object of interest: for these, a single label is an appropriate description of their content. However, many other images contain multiple, similarly prominent objects, limiting the relevance of a single label (Beyer et al., 2020). In these cases, the ImageNet label is just one of many equally valid descriptions of the image, and as a result an image classifier can be penalized for producing a correct description that happens not to coincide with the one chosen by the ImageNet label. In short, a single label per image is not sufficient in many cases.
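The Top-1 and Top-5 accuracy metrics defined above can be sketched as follows; the toy scores and labels are made up for illustration:

```python
def topk_accuracy(scores, labels, k=1):
    """Fraction of examples whose true label is among the k highest-scored classes."""
    hits = 0
    for class_scores, label in zip(scores, labels):
        # indices of the k classes with the highest predicted score
        topk = sorted(range(len(class_scores)),
                      key=lambda c: class_scores[c], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

# three examples, four classes each; true labels are 0, 2 and 3
scores = [[0.7, 0.1, 0.1, 0.1],   # correct class ranked 1st
          [0.4, 0.3, 0.2, 0.1],   # correct class ranked only 3rd
          [0.1, 0.2, 0.3, 0.4]]   # correct class ranked 1st
labels = [0, 2, 3]
print(topk_accuracy(scores, labels, k=1))  # only 2 of 3 top-1 predictions hit
print(topk_accuracy(scores, labels, k=3))  # with k=3 the middle example also hits
```

Top-5 accuracy is simply `k=5`; it is always at least as high as Top-1, which is why leaderboards report both.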
Beyer et al. (2020) concluded both yes and no as an answer to the question “Are we done with ImageNet?”. The shortcomings of ImageNet labels and their accuracy were identified, and they provided a new ImageNet validation set, ReaL (“Reassessed Labels”), and also a new metric, called ReaL accuracy (Beyer et al., 2020). The ReaL accuracy measures the precision of the model's top-1 prediction, which is deemed correct if it is included in the set of ReaL labels. These findings suggested that although the original set of labels may be nearing the end of its useful life, ImageNet and its ReaL labels can readily benchmark progress in visual recognition for the foreseeable future.

Adding a localization task to the classification task results in object detection. It is used to analyze more realistic cases, like the ones mentioned above, in which multiple objects may or may not exist in an image. The location of an object is typically represented by a bounding box.

2.3.3.2.2 MS-COCO & Object365

In recent years, the Microsoft COCO dataset and the Object365 dataset have become the standards to evaluate object detection algorithms, but it is also possible to use an ImageNet dataset. The primary challenge metric is called mean Average Precision (mAP) at Intersection over Union (IoU) = .50:.05:.95. The IoU is the intersection of the predicted and ground truth boxes divided by the union of the predicted and ground truth boxes. IoU, also called the Jaccard Index, ranges from 0 to 1, where 0 means no overlap and 1 means perfect overlap. But how is precision captured in the context of object detection? Precision is the ratio True Positive / (True Positive + False Positive). With the help of the IoU threshold, it is possible to decide whether a prediction is a True Positive (TP), a False Positive (FP), or a False Negative (FN).
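The IoU computation and the thresholded TP/FP decision just described can be sketched as follows; the (x1, y1, x2, y2) box format and the coordinates are illustrative assumptions:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # corners of the intersection rectangle (empty if the boxes do not overlap)
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred, truth = (0, 0, 10, 10), (5, 5, 15, 15)
score = iou(pred, truth)
print(score)                            # 25 / 175, roughly 0.143
print("TP" if score >= 0.5 else "FP")   # below the 0.5 threshold, counted as FP
```

Evaluating the same prediction at each threshold in {0.50, 0.55, …, 0.95} and averaging the resulting precision is the idea behind the COCO mAP metric.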
The example below shows predictions with the IoU threshold α set at 0.5. The .50:.05:.95 means that one uses the 10 IoU thresholds {0.50, 0.55, 0.60, …, 0.95}. COCO uses this as its primary metric because it rewards detectors with better localization (Microsoft, 2019).

Object detection and image segmentation are both tasks concerned with localizing objects of interest in an image, but in contrast to object detection, image segmentation focuses on pixel-level grouping of different semantics. Image segmentation can be split into various tasks, including instance segmentation, panoptic segmentation, and semantic segmentation. Instance segmentation is a task that requires the identification and segmentation of individual instances in an image. Semantic segmentation is a task that requires segmenting all the pixels in the image based on their class label. Panoptic segmentation is a combination of semantic and instance segmentation: the task is to classify all the pixels belonging to a class label, but also to identify which instance of the class they belong to. Panoptic and instance segmentation are often done on COCO.

2.3.3.2.3 ADE20K

Semantic segmentation can be done on ADE20K (Zhou et al., 2017). ADE are the first three letters of the name Adela Barriuso, who single-handedly annotated the entire dataset, and 20K refers to the roughly 20,000 images in the dataset. This dataset shows a high annotation complexity: any image in ADE20K contains at least five objects, and the maximum number of object instances per image reaches 273. To assess the performance of a model on the ADE20K dataset one uses the mean IoU, the IoU between the predicted and ground-truth pixels averaged over all classes. In contrast to the object detection task, the definition of TP, FP, and FN is slightly different, as it is not based on a predefined threshold.
(Figure: example detections at IoU threshold α = 0.5; a prediction with IoU = 0.96 counts as True Positive, IoU = 0.22 as False Positive, and IoU = 0.00 as False Negative.)

TP is now the area of intersection between the ground truth and the segmentation mask. FP is the predicted area outside the ground truth. FN is the number of pixels in the ground truth area that the model failed to predict. The calculation of IoU is the same as in object detection tasks: it is the intersection of the prediction and the ground truth, i.e. TP, divided by their union, which is essentially TP + FN + FP. An example is shown below.

FIGURE 2.32: taken from https://learnopencv.com

2.3.3.3 Multi-Modal Benchmarks

Visual understanding goes well beyond object recognition or semantic segmentation. With one glance at an image, a human can effortlessly imagine the world beyond the pixels. This is emphasized by the quote “a picture says more than a thousand words”. High-order cognition and commonsense reasoning about the world are required to infer people's actions, goals, and mental states. To answer visual understanding tasks, a model needs to leverage more than one modality.

2.3.3.3.1 Visual Commonsense Reasoning (VCR)

Visual understanding tasks require seamless integration between recognition and cognition, and this can be formalized as Visual Commonsense Reasoning (VCR). Zellers et al. (2019) introduced a new dataset called VCR. It consists of 290k multiple-choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and high-quality problems at scale is Adversarial Matching: incorrect choices are obtained via maximum-weight bipartite matching between queries and responses. This matching transforms rich annotations into multiple-choice questions with minimal bias. VCR is cast as a four-way multiple-choice task.
The underlying scenes come from the Large Scale Movie Description Challenge and YouTube movie clips. They searched for interesting and diverse situations; to ensure this, they trained and applied an “interestingness filter”. The most interesting images were passed to workers on Amazon Mechanical Turk, with additional context in the form of video captions. After reading this, the workers had to propose one to three questions about the image and, for each question, provide a reasonable answer and a rationale. This results in an underlying dataset with high agreement and diversity of reasoning; almost every answer and rationale is unique. To make these cognition-level questions simple to ask, and to avoid the clunkiness of referring expressions, VCR's language integrates object tags ([person2]) and explicitly excludes referring expressions (“the woman on the right”). These object tags are detected with Mask-RCNN. The following types of questions are in the benchmark: 38% Explanation (“Why is [person11] wearing sunglasses inside?”), 24% Activity (“What are [person1] and [person2] doing?”), 13% Temporal (“What will [person6] do after unpacking the groceries?”), 8% Mental, 7% Role, 5% Scene, 5% Hypothetical.

In this setup, a model is provided a question and has to pick the best answer out of four choices, only one of which is correct. If the model answered correctly, a new question, along with the correct answer, is provided, and the model has to justify it by picking the best rationale out of four choices. The first part is called Question Answering (Q → A) and the second part Answer Justification (QA → R). Both parts are combined into a Q → AR metric, in which a model only gets a question right if it answers correctly and picks the right rationale.
If it gets either the answer or the rationale wrong, the entire prediction is wrong. Models are evaluated in terms of accuracy. The results at release were that humans find VCR easy (over 90% accuracy), while state-of-the-art vision models struggle (~45%). At the moment of writing, the best model achieves 85.5 in (Q → A), 87.5 in (QA → R) and 74.9 in (Q → AR). So the models are closing the gap, but VCR is still far from solved. A “simpler” approach to evaluating vision-language models is to ask questions about an image that do not require reasoning.

2.3.3.3.2 Visual Question Answering 1.0 & 2.0 (VQA)

For this reason, Antol et al. (2015) created an open-ended answering task and a multiple-choice task. Their dataset contains roughly 250k images, 760k questions, and 10M answers. 204k images are taken from the MS COCO dataset, but newly created datasets are also used. Three questions were collected for each image or scene, and each question was answered by ten subjects along with their confidence. “What”-, “how”- and “is”-questions are mainly used in the benchmark. But the dataset had major flaws in its creation: a model blindly answering “yes” without reading the rest of the question or looking at the associated image results in a VQA accuracy of 87%; the most common sport answer, “tennis”, was the correct answer for 41% of the questions starting with “What sport is”; and “2” is the correct answer for 39% of the questions starting with “How many” (Antol et al., 2015).

Zhang et al. (2016b) pointed out a particular ‘visual priming bias’ in the VQA dataset and showed that language provides a strong prior that can result in good superficial performance without the underlying models truly understanding the visual content. Zhang et al.
(2016b) collected a balanced dataset containing pairs of complementary scenes to reduce or eliminate this strong language prior. Goyal et al. (2017) did the same and made a second iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). They balanced the popular VQA dataset (Antol et al., 2015) by collecting complementary images such that every question in the balanced dataset is associated not with a single image, but with a pair of similar images that result in two different answers to the question. The dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs.

2.3.3.4 GQA

Hudson and Manning (2019) introduced the GQA dataset for real-world visual reasoning and compositional question answering. It consists of 113K images and 22M questions of assorted types and varying compositionality degrees, measuring performance on an array of reasoning skills such as object and attribute recognition, transitive relation tracking, spatial reasoning, logical inference and comparisons. They also proposed Consistency, Validity and Plausibility as new measures to get more insight into models' behavior and performance. Consistency measures the consistency of responses across different questions; to achieve high consistency, a model may require a deeper understanding of the question semantics in the context of the image. The Validity metric checks whether a given answer is in the scope of the question, e.g. responding with some color to a color question. The Plausibility score goes a step further, measuring whether the answer is reasonable, or makes sense, given the question (e.g. elephants usually do not eat pizza). They even made a comparison between GQA and VQA 2.0.
They came to the conclusion that the questions of GQA are objective, unambiguous, more compositional and can be answered from the images only, potentially making this benchmark more controlled and convenient for making research progress on. Conversely, VQA questions tend to be a bit more ambiguous and subjective, at times with no clear and conclusive answer. Finally, we can see that GQA provides more questions for each image and thus covers it more thoroughly than VQA.

2.3.3.4.1 Generative Benchmarks

Almost everybody is talking right now about generative models like DALL-E 2, Imagen and Parti; it seems like every month a new one is presented. But how can we compare these models? Automatic image quality and automatic image-text alignment are two reasonable evaluation metrics. The Fréchet Inception Distance (FID) can be used as the primary automated metric for measuring image quality. It compares the distribution of generated images with the distribution of the real images that were used to train the generator; a small value is desirable, as it is a distance measure. Text-image fit can be captured through automated captioning evaluation: an image output by the model is captioned by a model capable of image captioning, and the similarity of the input prompt and the generated caption is then assessed via BLEU, CIDEr, METEOR and SPICE. Human evaluation is also done: different generative models are given the same prompts and a human is asked to choose which output is a higher-quality image and which is a better match to the input prompt. One always has to keep in mind that the published images of generative models are “cherry picked”; they do not typically represent, for example, a single-shot interaction in which the model directly produces such an image. To make this clear, Yu et al.
(2022a) showed their way of growing the cherry tree.

FIGURE 2.33: Examples of iteratively refined prompts (e.g. growing “A smiling sloth” and “A van parked on grass” into a detailed scene of a sloth wearing a leather jacket, a cowboy hat, a kilt and a bowtie, holding a quarterstaff and a big book, in front of a shiny VW van with a cityscape painted on it, parked on grass) and the corresponding outputs; taken from the Parti paper.

2.3.3.4.2 PartiPrompts, DrawBench, Localized Narratives

In a sense, this is a form of model whispering, as one stretches such models to their limits. Besides that, they also present PartiPrompts (P2), a set of over 1600 (English) prompts curated to measure model capabilities across a variety of categories and controlled dimensions of difficulty. P2 prompts can be simple, but can also be complex, such as the 67-word description they created for Vincent van Gogh's The Starry Night. DrawBench is a similar dataset.
Also the Localized Narratives dataset from the dataset section consists of long prompts, and thus it can also be used as a benchmark for generative models. Current benchmarks give a good perspective on model performance on a wide range of V&L tasks, but the field is only starting to assess why models perform so well and whether models learn specific capabilities that span multiple V&L tasks.

2.3.3.4.3 FOIL it!

Shekhar et al. (2017) proposed an automatic method for creating a large dataset of real images with minimal language bias and some diagnostic abilities. They extended the MS-COCO dataset and created FOIL-COCO. FOIL stands for “Find One mismatch between Image and Language caption”, and the dataset consists of images associated with incorrect captions. The captions are produced by introducing one single error (or ‘foil’) per caption in existing, human-annotated data. Each datapoint in FOIL-COCO can thus be described as a triplet consisting of an image, an original caption and a foil caption. The data generation process consists of four main steps:

1. Generation of replacement word pairs
2. Splitting of replacement pairs into training and testing
3. Generation of foil captions
4. Mining the hardest foil caption for each image

The models are evaluated on three different tasks. The first one is correct vs. foil classification: given an image and a caption, the model is asked to mark whether the caption is correct or wrong. The aim is to understand whether LaVi models can spot mismatches between their coarse representations of language and visual input. The second task is foil word detection: given an image and a foil caption, the model has to detect the foil word. The aim is to evaluate the understanding of the system at the word level. The last task is foil word correction: given an image, a foil caption and the foil word, the model has to detect the foil and provide its correction.
The aim is to check whether the system's visual representation is fine-grained enough to extract the information necessary to correct the error. Their hypothesis is that systems which, like humans, deeply integrate the language and vision modalities should spot foil captions quite easily.

2.3.3.4.4 VALSE

The Vision And Language Structured Evaluation (VALSE) benchmark (Parcalabescu et al., 2022) builds on the same idea. It aims to gauge the sensitivity of pre-trained V&L models to foiled instances and covers a wide spectrum of basic linguistic phenomena affecting the linguistic and visual modalities: existence, plurality, counting, spatial relations, actions, and entity coreference. To generate the foils, they first use strong language models to propose foils, and second they use natural language inference (NLI) to filter out generated captions that can still describe the image. To do this in an automatic fashion, they use the image as a premise and the caption as its entailed hypothesis; additionally, they use the caption as a premise and the foil as the hypothesis. If an NLI model predicts the foil to be neutral or a contradiction with respect to the caption, they see this as an indicator of a good foil. Finally, they used human annotators to validate all generated testing data. Mainly the MS-COCO dataset is used. VALSE is a task-independent, zero-shot benchmark to assess the extent to which models learn to ground specific linguistic phenomena as a consequence of their pretraining.

2.3.3.5 Other Benchmarks

As we do not live in a world with unlimited resources, it is also important to keep track of how much energy is consumed to train models and how big the carbon footprint is. Strubell et al. (2019b) investigated some NLP models and benchmarked model training and development costs in terms of dollars and estimated CO2 emissions.
They came to the result that training a single BERT base model without hyperparameter tuning on GPUs requires the same energy as a trans-American flight. On average, a human is responsible for 5t of CO2 per year, and Strubell et al. (2019b) estimated that the training procedure of a big Transformer with neural architecture search emitted 284t of CO2. Later works (Lottick et al., 2019; Henderson et al., 2020) have released online tools to benchmark energy usage, and initiatives such as the SustainNLP workshop have since taken up the goal of prioritizing computationally efficient hardware and algorithms. These findings are just some points one should keep in mind.

In the following chapters we will see how the multimodal architectures use these datasets and also how they perform on the given benchmarks.

3 Multimodal architectures

Authors: Luyang Chu, Karol Urbanczyk, Giacomo Loss, Max Schneider, Steffen Jauch-Walser

Supervisor: Christian Heumann

Multimodal learning refers to the process of learning representations from different types of input modalities, such as image data, text or speech. Due to methodological breakthroughs in the fields of Natural Language Processing (NLP) as well as Computer Vision (CV) in recent years, multimodal models have gained increasing attention as they are able to strengthen predictions and better emulate the way humans learn. This chapter focuses on discussing images and text as input data. The remainder of the chapter is structured as follows:

The first part, “Image2Text”, discusses how transformer-based architectures improve meaningful captioning for complex images using a new large-scale, richly annotated dataset, COCO (Lin et al., 2014c; Cornia et al., 2020). While looking at a photograph and describing it, or parsing a complex scene and describing its context, is not a difficult task for humans, it appears to be much more complex and challenging for computers.
We start by focusing on images as input modality. In 2014, Microsoft COCO was developed with the primary goal of advancing the state of the art (SOTA) in object recognition by diving deeper into the broader question of scene understanding (Lin et al., 2014c). "COCO" is an acronym for Common Objects in Context. It addresses three core problems in scene understanding: object detection (non-iconic views), segmentation, and captioning. While transformer-based architectures are already widely used for NLP tasks like machine translation and language understanding, their potential for applications in the multimodal context has not been fully explored yet. With the help of the MS COCO dataset, the transformer-based architecture "Meshed-Memory Transformer for Image Captioning" (M2) will be introduced, which improves both the image encoding and the language generation steps (Cornia et al., 2020). The performance of M2 and other fully-attentive models will be compared on the MS COCO dataset.

Next, in "Text2Image", the idea of incorporating textual input in order to generate visual representations is described. Current advancements in this field have been made possible largely by recent breakthroughs in NLP, which first allowed for learning contextual representations of text. Transformer-like architectures are used to encode the input into embedding vectors, which later help to guide the process of image generation. The chapter discusses the development of the field in chronological order, looking into the details of the most recent milestones. Concepts such as generative adversarial networks (GANs), variational auto-encoders (VAEs), VAEs with vector quantization (VQ-VAE), diffusion, and autoregressive models are covered to give the reader a better understanding of the roots of current research and where it might be heading.
Some of the most outstanding outputs generated by state-of-the-art works are also presented in the chapter.

The third part, "Images supporting Language Models", deals with the integration of visual elements into purely textual language models. Distributional semantic models such as Word2Vec and BERT assume that the meaning of a given word or sentence can be understood by looking at how (in which context) and when the word or sentence appears in a text corpus, namely from its "distribution" within the text. This assumption has historically been questioned, however, on the grounds that words and sentences must be grounded in other perceptual dimensions in order to understand their meaning (see for example the "symbol grounding problem"; Harnad, 1990). For these reasons, a broad range of models has been developed with the aim of improving pure language models by leveraging additional perceptual dimensions, such as the visual one. This subchapter focuses in particular on the integration of visual elements (here: images) to support pure language models for various tasks at the word-/token-level as well as at the sentence-level. The starting point is always a language model, into which visual representations (often extracted with the help of large pools of images from datasets like MS COCO; see the chapter "Img2Text" for further references) are to be "integrated". But how? A wide range of solutions has been proposed: at one end of the spectrum, textual and visual elements are learned separately and then "combined" afterwards, whereas at the other end, textual and visual features are learned simultaneously/jointly.

For example, Silberer and Lapata (2014) implement a model in which a one-to-one correspondence between textual and visual space is assumed. Text and visual representations are passed to two separate unimodal encoders, and both outputs are then fed to a bimodal autoencoder. In contrast, Bordes et al. (2020) propose a "text objective function" whose parameters are shared with an additional "grounded objective function". The training of the latter takes place in what the authors call a "grounded space", which avoids the one-to-one correspondence between textual and visual space. These are just introductory examples, and between these two approaches there are many shades of gray (probably even more than fifty). These models exhibit better performance than pure language models in many instances, but they still struggle in some respects, for example when dealing with abstract words and sentences.

FIGURE 3.1: Left: Silberer and Lapata (2014) stack autoencoders to learn higher-level embeddings from textual and visual modalities, encoded as vectors of attributes. Right: Bordes et al. (2020) fuse textual and visual information in an intermediate space denoted as "grounded space"; the "grounding objective function" is not applied directly on sentence embeddings but trained on this intermediate space, onto which sentence embeddings are projected.

Afterwards, in the subchapter "Text supporting Image Models", approaches where natural language is used as additional supervision for CV models are described. Intuitively, these models should be more powerful than models supervised solely by manually labeled data, simply because there is much more signal available in the training data. One prominent example is the CLIP model (Radford et al., 2021a) with its new dataset WIT (WebImageText), comprising 400 million text-image pairs scraped from the internet. Similar to "Text2Image", the recent success stories in NLP have inspired most of the new approaches in this field, most importantly pre-training methods which learn directly from raw text (e.g. GPT-n, Generative Pre-trained Transformer; Brown et al., 2020). The acronym CLIP stands for Contrastive Language-Image Pre-training.
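The "contrastive" part of the name refers to the batch-wise matching objective explained below: each image embedding should score highest with its own text embedding, and vice versa. A minimal pure-Python sketch of such a symmetric contrastive loss (toy 2-D embeddings in place of real encoder outputs; the function names and the toy data are illustrative, not CLIP's actual implementation):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss: the i-th image should match the i-th text."""
    n = len(img_emb)
    # similarity matrix, scaled by a temperature
    logits = [[dot(img_emb[i], txt_emb[j]) / temperature for j in range(n)]
              for i in range(n)]
    # image -> text direction: row i should put its probability mass on column i
    loss_i2t = -sum(math.log(softmax(row)[i]) for i, row in enumerate(logits)) / n
    # text -> image direction: column j should put its mass on row j
    cols = [[logits[i][j] for i in range(n)] for j in range(n)]
    loss_t2i = -sum(math.log(softmax(col)[j]) for j, col in enumerate(cols)) / n
    return (loss_i2t + loss_t2i) / 2

# toy batch: matching pairs already have identical unit-norm embeddings,
# so the loss is near zero; shuffling the texts makes it large
imgs = [[1.0, 0.0], [0.0, 1.0]]
txts = [[1.0, 0.0], [0.0, 1.0]]
print(clip_style_loss(imgs, txts))
```

In the real model the temperature (logit scale) is itself a learned parameter, and the embeddings come from the jointly trained text and image encoders described next.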
A transformer-like architecture is used to jointly pre-train a text encoder and an image encoder. For this, a contrastive objective is employed: correctly predicting which natural language text pertains to which image within a given batch. Training this way turned out to be more efficient than generating captions for images. This leads to a flexible model which, at test time, uses the learned text encoder as a "zero-shot" classifier on embeddings of the target dataset's classes. The model can, for example, perform optical character recognition, geo-location detection and action recognition. Performance-wise, CLIP can be competitive with task-specific supervised models, while never having seen an instance of the specific dataset before. This suggests an important step towards closing the "robustness gap", where machine learning models fail to meet the expectations set by their previous performance, especially on ImageNet test sets, on new datasets.

Finally, the subchapter "Models for both modalities" discusses how text and image inputs can be incorporated into a single unifying framework in order to get closer to a general self-supervised learning framework. There are two key advantages that make such an architecture particularly interesting. Similar to the models mentioned in previous parts, being devoid of human labelling, self-supervised models do not suffer from the same capacity constraints as regular supervised learning models. On top of that, while there have been notable advances in dealing with different modalities using single-modality models, it is often unclear to what extent a model structure generalizes across different modalities.
Rather than potentially learning modality-specific biases, a general multipurpose framework can help increase robustness while also simplifying the learner portfolio. In order to investigate different challenges and trends in vision-and-language modelling, this section takes a closer look at three different models: data2vec (Baevski et al., 2022), VilBert (Lu et al., 2019b) and Flamingo (Alayrac et al., 2022). Data2vec is a multimodal self-supervised learning model which uses a single framework to process either speech, natural language or visual information. This is in contrast to earlier models, which used different algorithms for different modalities. The core idea of data2vec, developed by Meta AI, is to predict latent representations of the full input data based on a masked view of the input, in a self-distillation setup using a standard transformer architecture (Baevski et al., 2022). As a result, the main improvement lies in the framework itself, not in the underlying architectures: for example, the transformer architecture being used follows Vaswani et al. (2017b). Through their parallelizability, transformers have several advantages over RNNs/CNNs, particularly when large amounts of data are being used, making them the de-facto standard approach in vision-language modelling (Dosovitskiy et al., 2020a). VilBert is an earlier model that, in contrast to data2vec, can handle cross-modality tasks. Finally, Flamingo is a modern few-shot learning model which features 80B parameters, significantly more than the other two models. Through a large language model incorporated in its architecture, it has strong text generation capabilities for tackling open-ended tasks. It also poses the question of how to efficiently train increasingly large models and shows the effectiveness of using Perceiver architectures (Jaegle et al., 2021a) to encode inputs from different modalities, as well as how to leverage communication between pretrained and frozen models.

3.1 Image2Text

Author: Luyang Chu
Supervisor: Christian Heumann

Image captioning refers to the task of producing descriptive text for given images. It has stimulated interest in both natural language processing and computer vision research in recent years. Image captioning is a key task that requires a semantic comprehension of images as well as the capacity to generate accurate and precise description sentences.

3.1.1 Microsoft COCO: Common Objects in Context

The understanding of visual scenes plays an important role in computer vision (CV) research. It includes many tasks, such as image classification, object detection, object localization and semantic scene labeling. Throughout the history of CV research, high-quality image datasets have played a critical role. They are not only essential for training and evaluating new algorithms, but also lead research in new, challenging directions (Lin et al., 2014c). In the early years, researchers developed datasets (Deng et al., 2009; Xiao et al., 2010; Everingham et al., 2010) which enabled the direct comparison of hundreds of image recognition algorithms, leading to an early evolution in object recognition. In the more recent past, ImageNet (Deng et al., 2009), which contains millions of images, has enabled breakthroughs in both object classification and detection research using new deep learning algorithms.

With the goal of advancing the state of the art in object recognition, especially scene understanding, a new large-scale dataset called "Microsoft Common Objects in Context" (MS COCO) was published in 2014. MS COCO focuses on three core problems in scene understanding: detecting non-iconic views, detecting the semantic relationships between objects, and determining the precise localization of objects (Lin et al., 2014c).
The MS COCO dataset contains 91 common object categories, with a total of 328,000 images and 2,500,000 labeled instances. The authors claim that all of these images could be recognized by a 4-year-old child. 82 of the categories include more than 5,000 labeled instances. These labeled instances may support the detection of relationships between objects in MS COCO. In order to provide precise localization of object instances, only "thing" categories, e.g. car, table, or dog, were included; objects which do not have clear boundaries, e.g. sky, sea, or grass, were not. In current object recognition research, algorithms perform well on images with iconic views, defined as images containing the single object category of interest in the center of the image. To accomplish the goal of detecting contextual relationships between objects, more complex images with multiple objects, as well as natural images from daily life, were also gathered for the dataset.

In addition to MS COCO, researchers have been working on the development of other large databases. In recent years, many large databases like ImageNet, PASCAL VOC (Everingham et al., 2010) and SUN (Xiao et al., 2010) have been developed in the field of computer vision, each with its own specific focus.

Datasets for object recognition can be roughly split into three groups: object classification, object detection and semantic scene labeling.

Object classification requires binary labels indicating whether objects are present in an image. ImageNet (Deng et al., 2009) is clearly distinguishable from the other datasets in terms of its size.
ImageNet contains 22k categories with 500-1,000 images each. In comparison to other datasets, ImageNet thus contains over 14 million labeled images with both entity-level and fine-grained categories, organized using the WordNet hierarchy, and it has enabled significant advances in image classification.

Detecting an object involves two steps: the first is to ensure that an object from a specified class is present; the second is to localize the object in the image with a bounding box. This can be applied to tasks like face detection or pedestrian detection. The PASCAL VOC (Everingham et al., 2010) dataset can be used for the detection of basic object categories. With 20 object categories and over 11,000 images, PASCAL VOC contains over 27,000 object instances labeled with bounding boxes; almost 7,000 of these object instances come with detailed segmentations (Lin et al., 2014c).

Labeling semantic objects in a scene requires that each pixel of an image is labeled with respect to the category it belongs to, such as sky, chair, etc., but individual instances of objects do not need to be segmented (Lin et al., 2014c). Some objects like sky, grass, or street can also be defined and labeled in this way. The SUN dataset (Xiao et al., 2010) combines many of the properties of both object detection and semantic scene labeling datasets for the task of scene understanding. It contains 908 scene categories from the WordNet dictionary (Fellbaum, 2000) with segmented objects. Its 3,819 object categories split into those typical of object detection datasets (person, chair) and those typical of semantic scene labeling (wall, sky, floor) (Lin et al., 2014c).

3.1.1.1 Image Collection and Annotation for MS COCO

MS COCO is a large-scale, richly annotated dataset; the process of building it consisted of two phases: data collection and image annotation.
In order to select representative object categories for the images in MS COCO, the researchers collected categories from several existing datasets like PASCAL VOC (Everingham et al., 2010) and other sources. All of these object categories could, according to the authors, be recognized by children between 4 and 8 years of age. The quality of the object categories was ensured by the co-authors, who rated the categories on a scale from 1 to 5 according to their common occurrence, practical applicability and diversity from other categories (Lin et al., 2014c). The final list comprised 91 categories; all categories from PASCAL VOC are included in MS COCO.

Given these representative object categories, the authors of MS COCO wanted to collect a dataset in which the majority of the included images are non-iconic. All included images can be roughly divided into three types, according to Fig. 3.2: iconic-object images, iconic-scene images and non-iconic images (Lin et al., 2014c).

FIGURE 3.2: Types of images in the dataset (Lin et al., 2014c).

Images were collected through two strategies: first, images from Flickr, a platform for photos uploaded by amateur photographers, were collected via their keywords. Second, the researchers searched for pairwise combinations of object categories like "dog + car" to gather more non-iconic images and images with rich contextual relationships (Lin et al., 2014c).

Due to the scale of the dataset and the high cost of the annotation process, designing a high-quality yet cost-efficient annotation pipeline was a difficult task. The annotation pipeline for MS COCO, shown in Fig. 3.3, was split into three primary tasks: 1. category labeling, 2. instance spotting, and 3. instance segmentation (Lin et al., 2014c).

FIGURE 3.3: Annotation pipeline for MS COCO (Lin et al., 2014c).

As one can see in Fig. 3.3, the object categories present in each image were determined in the first step.
Due to the large number of images and categories, the authors used a hierarchical approach instead of binary classification for each category. All 91 categories were grouped into 11 super-categories. The annotator then examined, for each super-category, whether an instance belonging to it was present; workers only had to label one instance per super-category with the corresponding category's icon (Lin et al., 2014c). Each image was labeled by eight workers. This hierarchical approach helped to reduce the labeling time; however, the first phase still took 20k worker hours to complete.

In the next step, all instances of the object categories in an image were marked, with each worker marking at most 10 instances of a given category per image. In both the instance spotting and the instance segmentation steps, the locations of the instances found by workers in the previous stage were shown to the current worker. Each image was again labeled by eight workers, summing up to a total of 10k worker hours.

In the final segmentation stage, each object instance was segmented; the segmentations of other instances and the specification of the object instance from the previous stage were again shown to the worker. Segmenting 2.5 million object instances was an extremely time-consuming task, requiring over 22 worker hours per 1,000 segmentations. To minimize cost and improve the quality of segmentation, all workers were required to complete a training task for each object category. To further ensure quality, an explicit verification step on each segmented instance was performed as well.
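The three annotation stages end up in a single JSON structure per dataset split. A minimal sketch of what a COCO-style annotation file looks like and how one might index it; the field names follow the common COCO layout, but the concrete values here are made up for illustration:

```python
import json

# A miniature, made-up example in the COCO annotation layout:
# "categories" carry the super-category used during hierarchical labeling,
# "annotations" carry the per-instance bounding box ([x, y, width, height])
# and segmentation polygon produced in the spotting/segmentation stages.
coco_like = {
    "images": [{"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480}],
    "categories": [{"id": 18, "name": "dog", "supercategory": "animal"}],
    "annotations": [{
        "id": 101, "image_id": 1, "category_id": 18,
        "bbox": [12.0, 40.0, 210.5, 160.0],
        "segmentation": [[12.0, 40.0, 222.5, 40.0, 222.5, 200.0, 12.0, 200.0]],
        "area": 33680.0, "iscrowd": 0,
    }],
}

def instances_for_image(data, image_id):
    """Join annotations with their category names for one image."""
    names = {c["id"]: c["name"] for c in data["categories"]}
    return [(names[a["category_id"]], a["bbox"])
            for a in data["annotations"] if a["image_id"] == image_id]

blob = json.dumps(coco_like)           # what would be stored on disk
print(instances_for_image(json.loads(blob), 1))
```

In practice one would load the official annotation files with a helper library such as pycocotools rather than indexing the raw JSON by hand.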
+3.1.1.2 +Comparison with other data sets +In recent years, researchers have developed several pre-training data sets and +benchmarks which helped the developemnt of algorithms for CV. Each of these +data sets varies significantly in size, number of categories and types of images. +In the previos part, we also introduced the different research focus of some +data sets like e.g. ImageNet (Deng et al., 2009), PASCAL VOC (Everingham +et al., 2010) and SUN (Xiao et al., 2010). ImageNet, containing millions of +images, has enabled major breakthroughs in both object classification and +detection research using a new class of deep learning algorithms. It was created +with the intention to capture a large number of object categories, many of +which are fine-grained. SUN focuses on labeling scene types and the objects +that commonly occur in them. Finally, PASCAL VOC’s primary application is +in object detection in natural images. MS COCO is designed for the detection +and segmentation of objects occurring in their natural context (Lin et al., +2014c). +With the help of Fig. 3.4, one can compare MS COCO to ImageNet, PASCAL +VOC and SUN with respect to different aspects (Lin et al., 2014c). +The number of instances per category for all 91 categories in MS COCO and +PASCAL VOC is shown in subfigure 3.4 (a). Compared to PASCAL VOC, +MS COCO has both more categories and (on average) more instances per +category. The number of object categories and the number of instances per + +3.1 Image2Text +91 +FIGURE 3.4: Comparison MS COCO with PASCAL VOC, SUN and Ima- +geNet (Lin et al., 2014c). +category for all the datasets is shown in subfigure 3.4 (d). MS COCO has +fewer categories than ImageNet and SUN, but it has the highest average +number of instances per category among all the data sets, which from the +perspective of authors might be useful for learning complex models capable +of precise localization (Lin et al., 2014c). 
Subfigures 3.4 (b) and (c) show the number of annotated categories and annotated instances per image for MS COCO, ImageNet, PASCAL VOC and SUN (average numbers of categories and instances are shown in parentheses). On average, MS COCO contains 3.5 categories and 7.7 instances per image, while ImageNet and PASCAL VOC both have on average fewer than 2 categories and 3 instances per image. The SUN dataset has the most contextual information, with on average 9.8 categories and 17 instances per image. Subfigure 3.4 (e) depicts the distribution of instance sizes for the MS COCO, ImageNet Detection, PASCAL VOC and SUN datasets.

3.1.1.3 Discussion

MS COCO is a large-scale dataset for detecting and segmenting objects found in everyday life, with the aim of improving the state of the art in object recognition and scene understanding. It focuses on non-iconic images of objects in natural environments and contains rich contextual information, with many objects present per image. Like the other commonly used vision datasets, it was labor-intensive and costly to create.
At a vast cost of over 70,000 worker hours, 2.5 million instances were annotated to drive the advancement of object detection and segmentation algorithms. MS COCO is still a good benchmark in the field of CV (Lin et al., 2014c). The MS COCO team also points out directions for the future: for example, "stuff" labels like "sky", "grass", and "street" may also be included in the dataset, since "stuff" categories provide significant contextual information for object detection.

3.1.2 Models for Image Captioning

The image captioning task is, in general, to describe the visual content of an image in natural language. It therefore requires an algorithm to understand and model the relationships between visual and textual elements, and to generate a sequence of output words (Cornia et al., 2020). In the last few years, many methods have been proposed for image captioning. Earlier approaches were based on the generation of simple templates, which contained the output produced by an object detector or attribute predictor (Socher and Fei-Fei, 2010; Yao et al., 2010). Given the sequential nature of language, most research on image captioning has focused on deep learning techniques, especially Recurrent Neural Networks (RNNs) (Vinyals et al., 2015; Karpathy and Fei-Fei, 2014) or their variants (e.g. LSTMs). Mostly, RNNs are used as language models for sequence generation, while the visual information is encoded in the output of a CNN. With the aim of modelling the relationships between image regions and words, graph convolutional neural networks (Yao et al., 2018a) or single-layer attention mechanisms (Xu et al., 2015) on the image encoding side have been proposed to incorporate more semantic and spatial relationships between objects. Although RNN-based models are widely adopted, their representational power is limited due to their sequential nature (Cornia et al., 2020).
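Whether the language model is recurrent or attention-based, the encoder-decoder captioning scheme just described reduces to the same generation loop: encode the image once, then repeatedly pick the next word until an end token. A minimal sketch with stand-in functions in place of a trained CNN encoder and RNN/transformer decoder (the vocabulary, the canned word sequence, and all names are purely illustrative):

```python
# Tiny illustrative vocabulary; <eos> terminates generation.
VOCAB = ["<eos>", "a", "dog", "on", "grass"]

def encode_image(image):
    """Stand-in for a CNN encoder: returns a fake feature vector."""
    return [float(sum(image)) % 7.0]

def next_word_scores(features, prefix):
    """Stand-in for an RNN/transformer language model conditioned on
    image features and the words generated so far."""
    canned = ["a", "dog", "on", "grass", "<eos>"]
    step = min(len(prefix), len(canned) - 1)
    return [1.0 if w == canned[step] else 0.0 for w in VOCAB]

def greedy_caption(image, max_len=10):
    features = encode_image(image)   # image is encoded only once
    caption = []
    for _ in range(max_len):
        scores = next_word_scores(features, caption)
        word = VOCAB[scores.index(max(scores))]   # greedy: argmax at each step
        if word == "<eos>":
            break
        caption.append(word)
    return " ".join(caption)

print(greedy_caption([0.1, 0.2, 0.3]))  # -> "a dog on grass"
```

Real decoders replace the greedy argmax with beam search or sampling, but the overall control flow is the same.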
Recently, new fully-attentive models, in which self-attention replaces recurrence, have been proposed. These approaches apply the Transformer architecture (Vaswani et al., 2017d) and BERT (Devlin et al., 2019) models to image captioning tasks. The Transformer consists of an encoder with a stack of self-attention and feed-forward layers, and a decoder which uses (masked) self-attention on words and cross-attention over the output of the last encoder layer (Cornia et al., 2020). While the aforementioned approaches exploit the original transformer architecture, in some other transformer-based approaches a transformer-like encoder was paired with an LSTM decoder. Herdade et al. (2019) proposed a transformer architecture for image captioning which additionally focuses on geometric relations between input objects: additional geometric weights between object pairs are computed and used to scale the attention weights. Similarly, an extension of the attention operator, in which the final attended information is weighted by a gate guided by the context, was introduced around the same time (Huang et al., 2019).

3.1.3 Meshed-Memory Transformer for Image Captioning (M2)

Transformer-based architectures have been widely used in sequence modeling tasks like machine translation and language understanding. However, their applicability to multimodal tasks like image captioning has remained largely under-explored (Cornia et al., 2020).

FIGURE 3.5: M2 Transformer (Cornia et al., 2020).

A novel fully-attentive approach called Meshed-Memory Transformer for Image Captioning (M2) was proposed in 2020 (Cornia et al., 2020), with the aim of improving the design of both the image encoder and the language decoder. Compared to previous image captioning models, M2 (see Fig. 3.5) has two novelties: the encoder encodes a multi-level representation of the relationships between image regions, covering both low-level and high-level relations, and a-priori knowledge can be learned and modeled by using persistent memory vectors. The multi-layer architecture exploits both low- and high-level visual relationships through a learned gating mechanism which computes a weight at each level; this creates a mesh-like connectivity between encoder and decoder layers for the sentence generation process (Cornia et al., 2020).

3.1.3.1 M2 Transformer Architecture

FIGURE 3.6: M2 Transformer Architecture (Cornia et al., 2020).

Fig. 3.6 shows the detailed architecture of the M2 Transformer. It can be divided into an encoder module (left) and a decoder module (right), both consisting of multiple layers. Given the input image regions X, the image is passed through the attention and feed-forward layers. The relationships between image regions, together with a-priori knowledge, are encoded in each encoding layer; the output of each encoding layer is read by the decoding layers to generate the caption word by word (Cornia et al., 2020).

All interactions between word- and image-level features of the input image X are modeled using scaled dot-product attention. Attention operates on vectors of queries q, keys k and values v, and takes a weighted sum of the value vectors according to a similarity distribution between query and key vectors.
Attention can be defined as follows (Cornia et al., 2020):

Attention(Q, K, V) = softmax(Q K^T / √d) V        (3.1)

where Q is a matrix of n_q query vectors, K and V both contain n_k keys and values, all vectors have the same dimensionality, and d is a scaling factor.

3.1.3.1.1 Memory-Augmented Encoder

For the given image regions X, attention can be used to obtain a permutation-invariant encoding of X through the self-attention operation used in the Transformer (Cornia et al., 2020):

S(X) = Attention(Wq X, Wk X, Wv X)        (3.2)

In this case, queries, keys and values are linear projections of the input features, with Wq, Wk, Wv their learnable weights; the result depends solely on the pairwise similarities between linear projections of the input set X. The self-attention operator thus encodes the pairwise relationships inside the input. But self-attention also has a limitation: a-priori knowledge on relationships between image regions cannot be modelled. To overcome this limitation, the authors introduce a memory-augmented attention operator, which extends the keys and values with additional prior information that does not depend on the image regions X. The additional keys and values are initialized as plain learnable vectors which can be directly updated via SGD. The operator is defined as follows (Cornia et al., 2020):

Mmem(X) = Attention(Wq X, K, V)
K = [Wk X, Mk]
V = [Wv X, Mv]        (3.3)

Mk and Mv are learnable matrices with n_m rows, and [·, ·] indicates concatenation.
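Eq. (3.3) amounts to ordinary scaled dot-product attention, Eq. (3.1), applied after concatenating learnable memory rows to the projected keys and values. A pure-Python sketch with tiny dimensions (identity projections stand in for Wq, Wk, Wv; all sizes and values are illustrative):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V, d):
    """Eq. (3.1): softmax(Q K^T / sqrt(d)) V, with matrices as lists of rows."""
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

def memory_augmented_attention(X, M_k, M_v, d):
    """Eq. (3.3): queries come from X only; keys and values are X extended
    with learnable memory slots M_k, M_v (linear projections omitted here)."""
    K = X + M_k   # row-wise concatenation, i.e. [Wk X, Mk]
    V = X + M_v   # row-wise concatenation, i.e. [Wv X, Mv]
    return attention(X, K, V, d)

X = [[1.0, 0.0], [0.0, 1.0]]   # two "image region" features
M_k = [[0.5, 0.5]]             # one memory slot (would be learned via SGD)
M_v = [[2.0, 2.0]]
print(memory_augmented_attention(X, M_k, M_v, d=2))
```

Note that only the key/value side grows by the memory slots; the output still has one row per query, so the encoding remains aligned with the input regions.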
The additional keys and values help to retrieve a-priori knowledge from the input while keeping the queries unchanged (Cornia et al., 2020).

For the encoding layer, the memory-augmented attention operator is injected into a transformer-like layer: its output is fed into a position-wise feed-forward layer (Cornia et al., 2020):

F(X)_i = U σ(V X_i + b) + c        (3.4)

Here X_i indicates the i-th vector of the input set and F(X)_i the i-th vector of the output; σ(·) is the ReLU activation function, V and U are learnable weight matrices, and b and c are bias terms (Cornia et al., 2020).

Each component is complemented by a residual connection and a layer-norm operation. The complete definition of an encoding layer can finally be written as (Cornia et al., 2020):

Z = AddNorm(Mmem(X))
X̃ = AddNorm(F(Z))        (3.5)

Finally, the full encoder stacks multiple encoding layers in a sequential fashion, so that the i-th layer uses the output set computed by layer i − 1; higher encoding layers can thus exploit and refine relationships identified by previous layers. N encoding layers produce the output X̃ = (X̃_1, ..., X̃_N) (Cornia et al., 2020).

3.1.3.1.2 Meshed Decoder

The decoder depends on both the previously generated words and the image region encodings. Meshed cross-attention takes advantage of all encoder layers to generate the caption for the image. The right side of Fig. 3.6 shows the structure of the meshed decoder. The input sequence vector Y and the outputs of all encoder layers X̃ are connected by the meshed attention operator, gated through cross-attention. The meshed attention operator is formally defined as (Cornia et al., 2020):

Mmesh(X̃, Y) = Σ_{i=1}^{N} α_i C(X̃_i, Y)        (3.6)

C(·, ·) stands for the encoder-decoder cross-attention; it is computed with queries from the decoder, while the keys and values come from the encoder (Cornia et al., 2020).
$$C(\tilde{X}^i, Y) = \text{Attention}(W_q Y, W_k \tilde{X}^i, W_v \tilde{X}^i) \qquad (3.7)$$

$\alpha_i$ is a matrix of weights of the same size as the cross-attention results; it models both the single contribution of each encoder layer and the relative importance between different layers (Cornia et al., 2020):

$$\alpha_i = \sigma(W_i [Y, C(\tilde{X}^i, Y)] + b_i) \qquad (3.8)$$

Here, $[\cdot,\cdot]$ indicates concatenation and $\sigma(\cdot)$ is the sigmoid activation function; $W_i$ is a weight matrix and $b_i$ is a learnable bias vector (Cornia et al., 2020).

In the decoder layers the prediction of a word should only depend on the previously generated words, so each decoder layer comprises a masked self-attention operation: the operator can only connect queries derived from the $t$-th element of its input sequence $Y$ with keys and values from the left sub-sequence, i.e. $Y_{\leq t}$.

Similar to the encoder layers, the decoder layers also contain a position-wise feed-forward layer, so a decoder layer can finally be defined as (Cornia et al., 2020):

$$Z = \text{AddNorm}(M_{mesh}(\tilde{X}, \text{AddNorm}(S_{mask}(Y))))$$
$$\tilde{Y} = \text{AddNorm}(F(Z)) \qquad (3.9)$$

where $S_{mask}$ indicates a masked self-attention over time (Cornia et al., 2020). The full decoder with multiple decoder layers takes the input word vectors as well as the $t$-th element (and all elements prior to it) of its output sequence to predict the word at $t+1$, conditioned on $Y_{\leq t}$. Finally, the decoder applies a linear projection and a softmax operation, whose output can be seen as a probability distribution over all words in the vocabulary (Cornia et al., 2020).

3.1.3.1.3 Comparison with other models on the MS COCO data set

The $M^2$ Transformer was evaluated on MS COCO, which is still one of the most commonly used test data sets for image captioning. Instead of using the original MS COCO splits, Cornia et al. (2020) follow the split provided by Karpathy and Fei-Fei (2014).
This split uses 5000 images for validation, 5000 images for testing, and the rest for training.

For model evaluation and comparison, standard metrics for evaluating generated sequences are used, namely BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE (Lin, 2004), CIDEr (Vedantam et al., 2015), and SPICE (Anderson et al., 2016), all of which have been introduced in the second chapter.

FIGURE 3.7: Comparison of $M^2$ with Transformer-based alternatives (Cornia et al., 2020):

                                          B-1    B-4    M      R      C      S
Transformer (w/ 6 layers as in [39])      79.1   36.2   27.7   56.9   121.8  20.9
Transformer (w/ 3 layers)                 79.6   36.5   27.8   57.0   123.6  21.1
Transformer (w/ AoA [14])                 80.3   38.8   29.0   58.4   129.1  22.7
M2 Transformer (1-to-1, w/o mem.)         80.5   38.2   28.9   58.2   128.4  22.2
M2 Transformer (1-to-1)                   80.3   38.2   28.9   58.2   129.2  22.5
M2 Transformer (w/o mem.)                 80.4   38.3   29.0   58.2   129.4  22.6
M2 Transformer (w/ softmax)               80.3   38.4   29.1   58.3   130.3  22.5
M2 Transformer                            80.8   39.1   29.2   58.6   131.2  22.6

The Transformer architecture in its original configuration with six layers had already been applied to captioning, but researchers speculated that specific architectures might be required for the task, so variations of the original Transformer are compared with the $M^2$ Transformer: a Transformer with three layers, and one applying the "Attention on Attention" (AoA) approach (Huang et al., 2019) to the attentive layers, both in the encoder and in the decoder (Cornia et al., 2020). The second part of the evaluation addresses the importance of the meshed connections between encoder and decoder layers. $M^2$ Transformer (1-to-1) is a reduced version of the original $M^2$ Transformer, in which each encoder layer is connected only to the corresponding decoder layer instead of being connected to all decoder layers.

As one can see from Fig. 3.7, the original Transformer reaches a CIDEr score of 121.8, which the reduced $M^2$ Transformer (1-to-1) improves to 129.2. The full meshed connectivity, which exploits relationships encoded at all layers and weights them with a sigmoid gating, yields a further improvement in CIDEr from 129.2 to 131.2. The table also covers the role of the memory vectors and of the softmax gating schema: eliminating the memory vectors reduces the performance by nearly 1 CIDEr point in both the reduced and the original $M^2$ Transformer (Cornia et al., 2020).

FIGURE 3.8: Comparison with the state-of-the-art on the "Karpathy" test split, in single-model setting (Cornia et al., 2020):

                       B-1    B-4    M      R      C      S
SCST [33]              -      34.2   26.7   55.7   114.0  -
Up-Down [4]            79.8   36.3   27.7   56.9   120.1  21.4
RFNet [15]             79.1   36.5   27.7   57.3   121.9  21.2
Up-Down+HIP [49]       -      38.2   28.4   58.3   127.2  21.9
GCN-LSTM [48]          80.5   38.2   28.5   58.3   127.6  22.0
SGAE [46]              80.8   38.4   28.4   58.6   127.8  22.1
ORT [13]               80.5   38.6   28.7   58.4   128.3  22.6
AoANet [14]            80.2   38.9   29.2   58.8   129.8  22.4
M2 Transformer         80.8   39.1   29.2   58.6   131.2  22.6

Fig. 3.8 compares the performance of the $M^2$ Transformer with several recently proposed models for image captioning. SCST (Rennie et al., 2017) and Up-Down (Anderson et al., 2018) use attention over a grid of features and attention over regions, respectively. RFNet (?) uses a recurrent fusion network to merge different CNN features; GCN-LSTM (Yao et al., 2018b) uses a Graph CNN to exploit pairwise relationships between image regions; SGAE (Yang et al., 2019) uses auto-encoding of scene graphs. The original AoANet (Huang et al., 2019) approach uses attention on attention for encoding image regions and an LSTM language model. Finally, ORT (Herdade et al., 2019) uses a plain Transformer and weights attention scores in the region encoder with pairwise distances between detections (Cornia et al., 2020).

As Fig. 3.8 shows, the $M^2$ Transformer exceeds all other models on BLEU-4, METEOR, and CIDEr, and is very close and competitive with SGAE on BLEU-1 and with ORT with respect to SPICE.

FIGURE 3.9: Examples of captions generated by the $M^2$ Transformer and the original Transformer model, as well as the corresponding ground-truths (Cornia et al., 2020).

Fig. 3.9 shows some examples of captions generated by the $M^2$ Transformer and the original Transformer model, together with the corresponding ground-truths. Judging by the selected examples, the $M^2$ Transformer is able to generate more accurate descriptions of the images and to detect more detailed relationships between image regions (Cornia et al., 2020).

The $M^2$ Transformer is a novel transformer-based architecture for image captioning. It improves the image encoding by learning a multi-level representation of the relationships between image regions while exploiting a priori knowledge at each encoder layer, and it uses a mesh-like connectivity at the decoding stage to exploit both low- and high-level features during language generation. The evaluation on MS COCO shows that the $M^2$ Transformer surpasses most recent approaches and achieves a new state of the art on MS COCO (Cornia et al., 2020).
3.2 Text2Image

Author: Karol Urbańczyk

Supervisor: Jann Goschenhofer

Have you ever wondered what a painting artist could paint for you if you ordered a high-quality oil painting of a psychedelic hamster dragon? Probably not. Nevertheless, one of the answers could be:

FIGURE 3.10: Hamster dragon

The catch is that there is no human artist. The above picture comes from a 3.5-billion-parameter model called GLIDE by OpenAI (Nichol et al., 2021b).
Every single value of every pixel was generated from a distribution that the model had to learn in the first place. Before generating the image, GLIDE abstracted the concepts of 'hamster' and 'dragon' from looking at millions of training images. Only then was it able to create and combine them successfully into a meaningful visual representation. Welcome to the world of current text-to-image modelling!

The cross-modal field of text-to-image models has developed significantly over recent years. What was considered unimaginable only a few years ago today constitutes a new benchmark for researchers. New breakthroughs are being published every couple of months. Following these, possible business use cases are emerging, which attracts investment from the greatest players in AI research. However, the trend towards closed-source models is continuing, and the text-to-image field is probably one of the most obvious ones where it can be noticed. We might need to get used to the fact that the greatest capabilities will soon be monopolized by a few companies.

At the same time, the general public is becoming aware of the field itself and the disruption potential it brings. Crucial questions are already emerging. What constitutes art? What does the concept of being an author mean? The result of a generative model is in a sense a combination, or variation, of the abstracts it has seen in the past. But the same holds for a human author. Is a discussion about prejudices and biases needed? Answers to all of these questions will require refinement through an extensive discussion. The last section of this chapter will try to highlight the most important factors that will need to be considered.

However, the primary intention of this chapter is to present the reader with a perspective on how the field was developing chronologically.
Starting with the introduction of GANs, through the first cross-domain models, and ending with state-of-the-art achievements (as of September 2022), it will also try to grasp the most important concepts without being afraid of making technical deep dives.

The author is aware that the rapid development pace makes it nearly impossible for this section to stay up-to-date, so it might very soon no longer cover the field fully. However, it must be stressed that the cutting-edge capabilities of the recent models tend to come from scale and software engineering tricks. Therefore, focusing on the core concepts should hopefully give this chapter a universal character, at least for some time. This design choice also explains why many important works did not make it into this publication. Just to name a few of them as honorable mentions: GAWWN (Reed et al., 2016a), MirrorGAN (Qiao et al., 2019), or more recent ones: LAFITE (Zhou et al., 2021), Make-a-Scene (Gafni et al., 2022) or CogView (Ding et al., 2021). In one way or another, all of them pushed the research frontier one step further. Therefore, it needs to be clearly stated - the final selection of this chapter's content is a purely subjective decision of the author.

3.2.1 Seeking objectivity

Before diving into particular models, we introduce objective evaluation procedures that help assess the performance of consecutive works in comparison to their predecessors. Unfortunately, objectivity in comparing generative models is very hard to capture, since there is no straightforward way to draw deterministic conclusions about a model's performance (Theis et al., 2015). However, multiple quantitative and qualitative techniques have been developed to make up for it. There is no general consensus as to which measures should be used; an extensive comparison has been performed by Borji (2018). A few of the most widely used ones in current research are presented below.
Inception Score (IS)

Introduced by Salimans et al. (2016), the Inception Score (IS) uses an Inception Net (Szegedy et al., 2015) trained on ImageNet to classify the fake images generated by the assessed model. It then measures the average KL divergence between the marginal label distribution $p(y)$ and the label distribution conditioned on the generated samples $p(y|x)$:

$$IS = \exp\big(\mathbb{E}_x[KL(p(y|x) \,||\, p(y))]\big)$$

$p(y)$ is desired to have high diversity (entropy); in other words, images from the generative model should represent a wide variety of classes. On the other hand, $p(y|x)$ is desired to have low diversity, meaning that each image should represent a meaningful concept: if a range of cat images is being generated, they all should be confidently classified by the Inception Net as cats. The intuition behind IS is that a generative model with a higher distance (KL divergence in this case) between these distributions should receive a better score. IS is considered a metric that correlates well with human judgment, hence its popularity.

Fréchet Inception Distance (FID)

A metric that is generally considered to improve upon the Inception Score is the Fréchet Inception Distance (FID). Heusel et al. (2017) argue that the main drawback of IS is that it does not consider the real data at all. Therefore, FID again uses an Inception Net; however, this time it embeds the images (both fake and real samples) into feature space, stopping at a specific layer. In other words, some of the ultimate layers of the network are discarded. The feature vectors are then assumed to follow a Gaussian distribution, and the Fréchet distance is calculated between the real and generated data distributions:

$$d^2((m, C), (m_w, C_w)) = ||m - m_w||_2^2 + \text{Tr}\big(C + C_w - 2(C C_w)^{1/2}\big)$$

where $(m, C)$ and $(m_w, C_w)$ denote the mean and covariance of the generated and real data Gaussians, respectively. Obviously, low FID values are desired.
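Both metrics follow directly from the formulas above. The numpy sketch below is illustrative only: a real evaluation would obtain $p(y|x)$ and the feature statistics from an actual Inception Net, and here the trace term is computed from the eigenvalues of $C C_w$ instead of an explicit matrix square root:

```python
import numpy as np

def inception_score(p_yx):
    """IS from an (N, K) array of label distributions p(y|x) for N generated images."""
    p_y = p_yx.mean(axis=0, keepdims=True)                   # marginal p(y)
    kl = (p_yx * (np.log(p_yx) - np.log(p_y))).sum(axis=1)   # KL(p(y|x) || p(y))
    return float(np.exp(kl.mean()))

def fid(m, C, m_w, C_w):
    """Frechet distance between the Gaussians (m, C) and (m_w, C_w)."""
    # Tr((C C_w)^(1/2)) equals the sum of the square roots of the eigenvalues
    # of C C_w, which are real and non-negative for PSD covariance matrices.
    eigvals = np.linalg.eigvals(C @ C_w).real
    tr_sqrt = np.sqrt(np.clip(eigvals, 0.0, None)).sum()
    return float(((m - m_w) ** 2).sum() + np.trace(C) + np.trace(C_w) - 2.0 * tr_sqrt)

# A generator whose samples are classified confidently and diversely scores high:
p = np.full((4, 4), 0.025)
np.fill_diagonal(p, 0.925)
print(inception_score(p))         # > 1; the higher, the better

# Identical real and generated statistics give the best possible FID of 0:
m, C = np.zeros(3), np.eye(3)
print(round(fid(m, C, m, C), 6))  # 0.0
```

In practice, numerical instabilities of the matrix square root are handled more carefully (e.g. by adding a small epsilon to the covariance diagonals).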
FID is considered to be consistent with human judgement and sensitive to image distortions, which are both desired properties. Figure 3.11 shows how FID increases (worsens) for different types of noise being added to images.

FIGURE 3.11: FID is evaluated for different noise types. From upper left to lower right: Gaussian noise, Gaussian blur, implanted black rectangles, swirled images, salt and pepper noise, CelebA dataset contaminated by ImageNet images. Figure from Heusel et al. (2017).

Precision / Recall

Precision and recall are among the most widely used metrics in many machine learning problem formulations. However, their classic definition cannot be applied to generative models due to the lack of objective labels. Sajjadi et al. (2018) came up with a novel definition of these metrics calculated directly from distributions, which was further improved by Kynkäänniemi et al. (2019). The argument behind the need for such an approach is that metrics such as IS or FID provide only a one-dimensional view of the model's performance, ignoring the trade-off between precision and recall. A decent FID result might very well mean high recall (large variation, i.e. a wide range of data represented by the model), high precision (realistic images), or anything in between.

Let $P_r$ denote the probability distribution of the real data, and $P_g$ the distribution of the generated data. In short, recall measures to what extent $P_r$ can be generated from $P_g$, while precision is trying to grasp how many generated images fall within $P_r$.

FIGURE 3.12: Definition of precision and recall for distributions. Figure from Kynkäänniemi et al. (2019).

See Kynkäänniemi et al. (2019) for a more thorough explanation.

CLIP score

CLIP is a model from OpenAI (Radford et al., 2021) which is explained in detail in the chapter about text-supporting computer vision models.
In principle, CLIP is capable of assessing the semantic similarity between a text caption and an image. Following this rationale, the CLIP score can be used as a metric and is defined as:

$$\mathbb{E}\big[s \cdot f(image) \cdot g(caption)\big]$$

where the expectation is taken over the batch of generated images, $f$ and $g$ are the CLIP image and caption encoders, and $s$ is the CLIP logit scale (Nichol et al., 2021b).

Human evaluations

It is common that researchers also report qualitative measures. Many potential applications of the models are focused on deceiving the human spectator, which motivates the reporting of metrics that are based on human evaluation. The general concept of these evaluations is to test for:

• photorealism
• caption similarity (image-text alignment)

Usually, a set of images is presented to a human, whose task is to assess their quality with respect to the two above-mentioned criteria.

3.2.2 Generative Adversarial Networks

The appearance of Generative Adversarial Networks (GAN) was a major milestone in the development of generative models. Introduced by Goodfellow et al. (2014c), the idea of GANs presented a novel architecture and training regime, which corresponds to a minimax two-player game between a Generator and a Discriminator (hence the word adversarial).
GANs can be considered an initial enabler for the field of text-to-image models, and for a long time GAN-like models achieved state-of-the-art results, hence the presentation of their core concepts in this chapter.

3.2.2.1 Vanilla GAN for Image Generation

In a vanilla GAN, the Generator model (G) and the Discriminator model (D) are optimized together in a minimax game, where G aims at generating a sample so convincing that D will not be able to distinguish whether it comes from the real or the generated image distribution. D, on the other hand, is trained to discriminate between the two. Originally, a multilayer perceptron was proposed as the model architecture for both D and G, although in theory any differentiable function could be used.

More formally, let $p_z$ denote the prior distribution defined on the input noise vector $z$. Then, the generator $G(z)$ represents a function that maps this noisy random input to a generated image $x$. The discriminator $D(x)$ outputs the probability that $x$ comes from the real data rather than from the generator's distribution $p_g$. In this framework, D shall maximize the probability of guessing the correct label of both real and fake data, while G is trained to minimize $\log(1 - D(G(z)))$. This corresponds to the following minimax value function:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log(D(x))] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

Figure 3.13 depicts this process in a visual way.

FIGURE 3.13: GAN framework as proposed in Goodfellow et al. (2014c).

Some of the generated samples that had already been achieved with this architecture in 2014 can be seen in Figure 3.14.

FIGURE 3.14: Samples from generators trained on different datasets: a) MNIST, b) TFD, c) CIFAR-10 (MLP used for G and D), d) CIFAR-10 (CNN used). Highlighted columns show the nearest real example of the neighbouring sample. Figure from Goodfellow et al. (2014c).
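The value function above can be estimated by Monte Carlo sampling. The toy 1-D numpy sketch below uses an invented data distribution and a hand-picked (not trained) discriminator purely for illustration; it shows that a generator matching $p_{data}$ drives $V$ down, which is exactly what G is optimizing for:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy 1-D setup: real data ~ N(2, 1); the generator shifts the noise prior
# z ~ N(0, 1) by theta, i.e. G(z) = z + theta.
def V(theta, D, n=100_000):
    x = rng.normal(2.0, 1.0, n)        # x ~ p_data
    z = rng.normal(0.0, 1.0, n)        # z ~ p_z
    g = z + theta                      # G(z)
    return np.mean(np.log(D(x))) + np.mean(np.log(1.0 - D(g)))

# A hand-crafted discriminator that calls everything left of 1 "fake":
D = lambda t: sigmoid(2.0 * (t - 1.0))

# The better the generator matches p_data, the harder D's job, the lower V:
print(V(0.0, D) > V(2.0, D))  # True: G(z) = z + 2 fools this D more than G(z) = z
```

A real GAN alternates gradient steps on D (ascending V) and G (descending V); this sketch only evaluates the objective for fixed players.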
3.2.2.2 Conditioning on Text

So far, only image generation has been covered, completely ignoring the textual input. Reed et al. (2016c) introduced an interesting concept of conditioning a DC-GAN (a GAN with CNNs as Generator and Discriminator) on textual embeddings. A separate model is trained and used for encoding the text. The resulting embeddings are concatenated with the noise vector and fed into the Generator, and the Discriminator takes the embeddings as an additional input as well. The resulting model is referred to as GAN-INT-CLS. Both abbreviations (INT and CLS) stand for specific training choices, which are going to be explained later in this chapter. An overview of the proposed architecture can be seen in Figure 3.15.

FIGURE 3.15: The proposed architecture of the convolutional GAN that is conditioned on text. The text encoding $\varphi(t)$ is fed into both the Generator and the Discriminator. Before further convolutional processing, it is first projected to lower dimensionality in fully-connected layers and concatenated with the image feature maps. Figure from Reed et al. (2016c).

Text embeddings

Since regular text embeddings are commonly trained in separation from the visual modality, simply by looking at the textual context, they are not well suited for capturing visual properties. This motivated Reed et al. (2016b) to come up with structured joint embeddings of images and text descriptions. GAN-INT-CLS implements them in the way described in Figure 3.16.

FIGURE 3.16: The structured joint embedding objective: a text encoding should have a higher compatibility score with images of the corresponding class than with those of any other class, and vice versa. Figure from Reed et al. (2016c).
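The idea behind these joint embeddings can be sketched as follows: an image is assigned to the class whose text descriptions have, on average, the highest compatibility score (the dot product of image and text encodings) with it. The embeddings below are random stand-ins for the outputs of real encoders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-computed embeddings: one vector per image, and a set of
# caption vectors per class. Compatibility is the dot product of the two.
def classify_image(img_emb, text_embs_per_class):
    # f_u(u): pick the class whose text descriptions are most compatible on average
    scores = [np.mean(t @ img_emb) for t in text_embs_per_class]
    return int(np.argmax(scores))

d = 16
# Two toy classes, five captions each, embedded around different means:
class_text_embs = [rng.normal(loc=m, size=(5, d)) for m in (-1.0, 1.0)]
img_emb = rng.normal(loc=1.0, size=d)   # an image embedded near class 1's captions
print(classify_image(img_emb, class_text_embs))
```

Training pushes matching image-text pairs towards high scores and mismatched ones towards low scores, so that this nearest-class rule becomes accurate.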
GoogLeNet is used as the image encoder $\phi$. For the text encoding $\varphi(t)$, the authors use a character-level CNN combined with an RNN. Essentially, the objective of the training is to minimize the distance between the encoded image and text representations. The image encoder is then discarded, and $\varphi$ alone is used as depicted in Figure 3.15.

GAN-CLS

CLS stands for Conditional Latent Space, which essentially means that the GAN is conditioned on the embedded text. However, in order to fully grasp how exactly the model is conditioned on the input, we need to go beyond architectural choices. It is also crucial to present the specific training regime that was introduced for GAN-CLS and the motivation behind it.

One way to train the system is to view text-image pairs as joint observations and train the discriminator to classify the entire pair as real or fake.
However, in such a case the discriminator has no notion of whether the image matches the meaning of the text, because it cannot distinguish between the two types of error that exist: the image being unrealistic, and the image being realistic but not matching the text.

The proposed solution to this problem is to present the discriminator with three types of observations at a time, all of which enter the loss function: {real image with right text}, {real image with wrong text}, and {fake image with right text}. The discriminator should classify them as {true}, {false}, {false}, respectively.

GAN-INT

The motivation behind this concept comes from the fact that interpolating between text embeddings tends to create observation pairs that are still close to the real data manifold. Therefore, generating additional synthetic text embeddings and using them instead of real captions in the training process works as a form of data augmentation and helps regularize training. Figure 3.17 may be helpful for developing an intuition for the interpolation process.

Results

The model achieves the best performance when both of the mentioned methods are in use (GAN-INT-CLS). The models successfully transfer style (pose of the objects) and background from the training data when trained on the CUB (birds) and Oxford-102 (flowers) datasets. They also show interesting zero-shot abilities, meaning they can generate observations from unseen test classes (Figure 3.18). When trained on MS-COCO, GAN-CLS proves its potential to generalize over many domains, although the results are not always coherent (Figure 3.19).

FIGURE 3.17: Interpolating between sentences. Figure from Reed et al. (2016c).

FIGURE 3.18: Zero-shot generated birds using GAN, GAN-CLS, GAN-INT, GAN-INT-CLS. Figure from Reed et al. (2016c).

FIGURE 3.19: Generated images using GAN-CLS on the MS-COCO validation set. Figure from Reed et al. (2016c).

3.2.2.3 Further GAN-like development

Generative Adversarial Networks were the leading approach for text-to-image models for most of the field's short history. In the years following the introduction of GAN-INT-CLS, new concepts emerged, trying to push the results further, many of them with a GAN architecture at their core. In this section, a few such ideas are presented. The intention is to quickly skim through the most important ones; a curious reader should follow the corresponding papers.

StackGAN

Zhang et al. (2016a) introduced the StackGAN. The main contribution of the paper, which also found its place in other researchers' works, was the idea to stack more than one generator-discriminator pair inside the architecture. The Stage-II (second pair) generator is supposed to improve the results from Stage-I, taking into account only:

• the text embedding (same as in Stage-I)
• the image generated in Stage-I

without a random vector. The deliberate omission of the random vector makes the generator work directly on improving the results from Stage-I. Another purpose is to increase the resolution (here from 64x64 to 256x256). The authors obtained great results already with two stages; however, in principle the architecture allows for stacking many of them.
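The data flow through the two stages can be sketched as follows. Both stage functions below are invented placeholders (the real stages are trained conditional generators), but the sketch highlights that Stage-II receives only the text embedding and the Stage-I image, with no fresh noise vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: Stage-I maps (z, text) to a coarse 64x64 image;
# Stage-II maps (coarse image, text) to a refined 256x256 image.
def stage1(z, text_emb):
    return rng.normal(size=(64, 64, 3))           # placeholder generator output

def stage2(coarse_img, text_emb):
    up = np.kron(coarse_img, np.ones((4, 4, 1)))  # naive 4x upsampling stand-in
    return up + 0.1 * rng.normal(size=up.shape)   # placeholder "refinement"

z, t = rng.normal(size=100), rng.normal(size=128)
img = stage2(stage1(z, t), t)  # note: no fresh noise enters Stage-II
print(img.shape)               # (256, 256, 3)
```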
FIGURE 3.20: Example StackGAN results: given the text descriptions, Stage-I generates low-resolution images which Stage-II refines into high-resolution ones. Figure from Zhang et al. (2016a).

AttnGAN

It is 2017 and many researchers believe attention is all they need (Vaswani et al., 2017e). Probably for the first time in text-to-image generation, an attention mechanism was used by Xu et al. (2017). The authors combined this idea with what StackGAN proposed and used three stages (generators G0, G1 and G2). However, this time the first layers of a particular generator attend to word feature vectors. This mechanism not only helps control how particular areas of the image are improved by consecutive generators, but also allows for visualizing attention maps.

FIGURE 3.21: Images generated by G0, G1, G2. The two bottom rows show the 5 most attended words by G1 and G2, respectively. Figure from Xu et al. (2017).
DM-GAN

Another important milestone was DM-GAN (Dynamic Memory GAN) (Zhu et al., 2019). At that time, models were primarily focusing on generating an initial image and then refining it to a high-resolution one (as e.g. StackGAN does). However, such models heavily depend on the quality of the initial image, and this problem was the main motivation for the authors to come up with a mechanism to prevent it. DM-GAN proposes a dynamic memory module with two main components: a memory writing gate, which helps select the most important information from the text based on the initial image, and a response gate, which merges the information from the image features with the memories. Both of these help refine the initial image much more effectively.

DF-GAN

Last but not least, DF-GAN (Deep Fusion GAN) (Tao et al., 2020) improves the results by proposing three concepts. The One-Stage Text-to-Image Backbone provides an architecture that is capable of abandoning the idea of multiple stacked generators and using a single one instead. It achieves that by a smart combination of several factors, i.a. the hinge loss and the use of residual blocks. Additionally, the Matching-Aware Gradient Penalty helps achieve high semantic consistency between text and image and regularizes the learning process. Finally, the One-Way Output helps the process converge more effectively.

3.2.3 Dall-E 1

OpenAI's Dall-E undoubtedly took the text-to-image field to another level. For the first time, a model showed great zero-shot capabilities, comparable to previous domain-specific models. To achieve that, an unprecedented scale of the dataset and training process was needed.
250 million text-image pairs were collected for that purpose, which enabled training of a 12-billion-parameter version of the model. Unfortunately, Dall-E is not publicly available and follows the most recent trend of closed-source models. Or, to put it more precisely, it started this trend, and GLIDE, Dall-E 2, Imagen, Parti and others followed. Nevertheless, Dall-E's inner workings are described in Ramesh et al. (2021b) and this section will try to explain its most important parts. Before that, however, it is crucial to understand one of the fundamental concepts that has been around in the field of generative models for quite some time already, namely Variational Autoencoders.

Variational Autoencoder (VAE)

The regular Autoencoder architecture aims at finding an identity function that is capable of finding a meaningful representation of the data in a lower-dimensional space and then reconstructing it. It is considered an unsupervised learning method for dimensionality reduction; however, it is trained in a supervised regime with the data itself being the label. The component performing the reduction is called the encoder, while the part responsible for the reconstruction is called the decoder. The idea behind the Variational Autoencoder (Kingma and Welling, 2013) is similar; however, instead of learning the mapping to a static low-dimensional vector, the model learns its distribution. This design equips the decoder part with the desired generative capabilities, as sampling from the latent low-dimensional space will result in varying data being generated. The architecture is depicted in Figure 3.22.

$q_\phi(z|x)$ denotes the encoder, under the assumption that $z$ comes from a multivariate Gaussian; its parameters $\mu$ and $\sigma$ are learned. The reconstruction process is modelled by the conditional probability $p_\theta(x|z)$, given a sampled latent vector $z$.
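The sampling step just described is usually implemented via the reparameterization trick, writing the sample as $z = \mu + \sigma \cdot \epsilon$. A minimal NumPy sketch (dimensions and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Draw z = mu + sigma * eps with eps ~ N(0, I).

    Writing the sample this way keeps it differentiable with
    respect to the learned encoder outputs mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Toy encoder outputs q(z|x) for a 16-dimensional latent space
mu, log_var = np.zeros(16), np.zeros(16)
z = reparameterize(mu, log_var)   # latent sample fed to the decoder p(x|z)
```

Because all randomness is isolated in $\epsilon$, gradients can flow through $\mu$ and $\sigma$ during training.
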
FIGURE 3.22: Variational (probabilistic) Autoencoder architecture. Figure from Weng (2018).

VQ-VAE / dVAE

The VQ-VAE (Vector Quantized VAE) (van den Oord et al., 2017) differs from the regular VAE in the way it approaches encoding the latent space. Instead of mapping data into a continuous distribution, the Vector Quantized version does it in a discrete way. This is motivated by the fact that many data modalities are more naturally represented discretely (e.g. speech, human language, reasoning about objects in images, etc.). VQ-VAE achieves that by using a separate codebook of vectors. The architecture is depicted in Figure 3.23.

FIGURE 3.23: VQ-VAE architecture. Figure from van den Oord et al. (2017).

The idea is to map the output of the encoder to one of the K vectors from the codebook. This process is called quantization and essentially means finding the vector that is the nearest neighbour of the encoder's output (in the sense of Euclidean distance). From that moment on, this newly found codebook vector is used instead. The codebook itself is also subject to the learning process. One could argue that passing gradients through such a discrete system during training might be problematic. VQ-VAE overcomes this problem by simply copying gradients from the decoder's input to the encoder's output. A great explanation of the training process and further mathematical details can be found in Weng (2018) and Snell (2021).

Dall-E, however, is using what is called dVAE.
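The nearest-neighbour quantization at the heart of both VQ-VAE and dVAE can be sketched in a few lines; a toy NumPy version with a made-up two-dimensional codebook:

```python
import numpy as np

def quantize(z_e, codebook):
    """Replace each encoder output by its nearest codebook vector
    (Euclidean distance), as in VQ-VAE.

    z_e:      (n, d) encoder outputs
    codebook: (K, d) learnable embedding vectors"""
    # squared distances between every encoder output and every codebook entry
    d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (n, K)
    idx = d2.argmin(axis=1)          # index of the nearest neighbour
    return codebook[idx], idx

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
z_e = np.array([[0.1, -0.1], [0.9, 1.2]])
z_q, idx = quantize(z_e, codebook)
# In a framework with autodiff, the "straight-through" trick
#   z_q = z_e + stop_gradient(z_q - z_e)
# copies the decoder's gradient straight to the encoder output.
```
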
Essentially, dVAE is a VQ-VAE with a couple of details changed. In short, the main difference is that instead of learning a deterministic mapping from the encoder's output to the codebook, it produces probabilities of a latent representation over all codebook vectors.

Dall-E system

Dall-E is composed of two stages. The above introduction of VQ-VAE was necessary to understand the first one: training a dVAE to compress 256x256 images into a 32x32 grid of tokens. This model plays a crucial role in the second stage.

The second stage is about learning the prior distribution of text-image pairs. First, the text is byte-pair encoded (Sennrich et al., 2015a) into a maximum of 256 tokens, with a vocabulary of size 16384. Next, the image representation encoded by the previously trained dVAE is unrolled (from a 32x32 grid to 1024 tokens) and concatenated to the text tokens. This sequence (of 256 + 1024 tokens) is used as the input for a huge transformer-like architecture. Its goal is to autoregressively model the next-token prediction.

At inference time, the text caption is again encoded into at most 256 tokens. The generation process starts with predicting all of the 1024 image-related tokens. They are then decoded with the dVAE decoder that was trained in the first stage. Its output represents the final image.

Results

The results achieved with the original Dall-E attracted so much attention mainly due to their diversity and the model's zero-shot capabilities. Dall-E was capable of producing better results than previous state-of-the-art models which were trained on data coming from the same domain as the evaluation data. One comparison can be seen in Figure 3.24. Outputs of some of the prior approaches described in this chapter compared with Dall-E can be seen in Figure 3.25.

Limitations

Although Dall-E made a huge step forward in text-to-image modelling, it still showed multiple flaws.
First, the photorealism of the outputs is still relatively low. In other words, when prompted for images containing realistic situations, it is rarely capable of deceiving human evaluators. Second, the model has evident problems with understanding relatively complex abstractions, such as text inside an image or relative object positions in the scene.

FIGURE 3.24: Human evaluation of Dall-E vs DF-GAN on text captions from the MS-COCO dataset. When asked for realism and caption similarity, evaluators preferred Dall-E's results over 90% of the time. Figure from Ramesh et al. (2021b).

FIGURE 3.25: Comparison of the results from Dall-E vs prior works on MS-COCO. Dall-E's outputs are chosen as the best out of 512 images, ranked by a contrastive model. Figure from Ramesh et al. (2021b).

3.2.4 GLIDE

Introduced by Nichol et al. (2021b), GLIDE started an era of huge-scale diffusion models. The concept of diffusion had already been used in the area of Deep Learning for some time. However, the authors of GLIDE took a step further and combined it with text-based guidance, which is supposed to steer the learning process in the direction of the text's meaning. This powerful method was proven to achieve outstanding results which remain competitive with state-of-the-art models at the time of writing.

Diffusion models

Before understanding the inner workings of GLIDE, it is important to introduce the core concept that is driving it, namely diffusion. The idea of diffusion originates from physics. In short, it corresponds to the process of diffusing particles, for example of one fluid in another. Normally it has a unidirectional character; in other words, it cannot be reversed. However, as Sohl-Dickstein et al.
(2015) managed to show, and Ho et al. (2020a) later improved, if the data diffusion process is modelled as a Markov chain with Gaussian noise being added in consecutive steps, it is possible to learn how to reverse it. This reversed process is exactly how the model generates images from pure random noise.

Let us construct a Markov chain, where the initial data point is denoted by $x_0$. In $t$ steps, Gaussian noise is added to the data. The distribution of the data at step $t$ can be characterized in the following way:

$$q(x_t|x_{t-1}) := \mathcal{N}(x_t; \sqrt{\alpha_t}\,x_{t-1}, (1-\alpha_t)I)$$

where $(1-\alpha_t)$ parametrizes the magnitude of the noise added at each step. Now, if $x_{t-1}$ is to be reconstructed from $x_t$, a model needs to learn to predict estimates of the gradients from the previous steps. The probability distribution of the previous step can be estimated as follows:

$$p_\theta(x_{t-1}|x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t), \Sigma_\theta(x_t))$$

where the mean function $\mu_\theta$ was proposed by Ho et al. (2020a). For a more detailed explanation of how this is later parametrized and trained, one can follow Weng (2021).

GLIDE system

GLIDE can essentially be broken down into two parts. The first of them is a pretrained Transformer model, which in principle is responsible for creating the text embeddings. The last token's embedding is used as a class embedding (text representation) in later stages. Additionally, all tokens from the last embedding layer are used (attended to) by all attention layers in the diffusion model itself. This makes the model aware of the text's meaning while reconstructing the previous step in the Markov chain.

The second component of GLIDE is the diffusion model itself. A U-Net-like architecture with multiple attention blocks is used here. This part's sole goal is to model $p_\theta(x_{t-1}|x_t, y)$, where $y$ corresponds to the last token embedding mentioned above.
Or, to put it differently, to predict $\epsilon_\theta(x_t|y)$, since the problem can be reframed as calculating the amount of noise added at each step.

Additionally, to make the model even more aware of the text's meaning, guidance is used at inference time. In short, the idea is to control the direction of the diffusion process. The authors test two different approaches. First, they try guidance with the use of a separate classifier, OpenAI's CLIP in this case. However, better results were in general achieved by the classifier-free guidance process. The idea is to produce two different images at each step: one conditioned on the text, the other one not. The difference between them is calculated and then, after significant scaling, added to the image obtained without conditioning. This way, the model speeds up the progression of the image towards the meaning of the text. This process can be written as:

$$\hat{\epsilon}_\theta(x_t|y) = \epsilon_\theta(x_t|\emptyset) + s \cdot (\epsilon_\theta(x_t|y) - \epsilon_\theta(x_t|\emptyset))$$

where $s$ denotes the parameter for scaling the difference between the two estimates.

Results

GLIDE achieves significantly more photorealistic results than its predecessors. FID scores reported on the MS-COCO 256x256 dataset can be seen in Figure 3.26. It is worth noting that GLIDE was not trained on this dataset, hence its zero-shot capabilities are even more impressive.

FIGURE 3.26: Comparison of FID on MS-COCO 256×256. Figure from Nichol et al. (2021b).

Model                            FID      Zero-shot FID
AttnGAN (Xu et al., 2017)        35.49
DM-GAN (Zhu et al., 2019)        32.64
DF-GAN (Tao et al., 2020)        21.42
DM-GAN + CL (Ye et al., 2021)    20.79
XMC-GAN (Zhang et al., 2021)      9.33
LAFITE (Zhou et al., 2021)        8.12
DALL-E (Ramesh et al., 2021)              ~28
LAFITE (Zhou et al., 2021)                26.94
GLIDE                                     12.24
GLIDE (Validation filtered)               12.89

Results are also preferred by human evaluators in terms of photorealism and the similarity of the image to its caption.
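As an aside, the classifier-free guidance combination introduced above is a one-liner in code; a small NumPy sketch, with made-up arrays standing in for the model's two noise estimates:

```python
import numpy as np

def guided_eps(eps_cond, eps_uncond, s):
    """Classifier-free guidance: move the unconditional noise estimate
    towards the text-conditioned one, scaled by s."""
    return eps_uncond + s * (eps_cond - eps_uncond)

eps_cond = np.array([1.0, 2.0])     # eps_theta(x_t | y)
eps_uncond = np.array([0.0, 0.0])   # eps_theta(x_t | empty prompt)
guided = guided_eps(eps_cond, eps_uncond, s=3.0)   # -> array([3., 6.])
```

With $s > 1$ the conditioned direction is exaggerated, which is what pushes the sample towards the text's meaning.
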
A comparison to DALL-E 1 results can be seen in Figure 3.27.

FIGURE 3.27: Win probabilities of GLIDE vs DALL-E. Figure from Nichol et al. (2021b).

Comparison                        Temp.   Photorealism   Caption similarity
DALL-E (no reranking)             1.0     91%            83%
                                  0.85    84%            80%
DALL-E reranked                   1.0     89%            71%
                                  0.85    87%            69%
DALL-E reranked + GLIDE blurred   1.0     72%            63%
                                  0.85    66%            61%

Finally, some cherry-picked images together with their corresponding captions can be seen in Figure 3.28.

FIGURE 3.28: Samples from GLIDE with classifier-free guidance and s = 3 (prompts include "a crayon drawing of a space elevator", "a futuristic city in synthwave style", "a pixel art corgi pizza" and "a fog rolling into new york"). Figure from Nichol et al. (2021b).

Limitations

GLIDE suffers from two problems. First, it fails when presented with a complex or unusual text prompt; a few examples can be seen in Figure 3.29. Second, the model is relatively slow at inference time (much slower than GANs). This is caused by the sequential character of the architecture, where consecutive steps of the Markov chain reconstruction cannot simply be parallelized.

FIGURE 3.29: Failures happen mostly for unusual prompts. Figure from Nichol et al. (2021b).

3.2.5 Dall-E 2 / unCLIP

The contribution that probably attracted the most attention in the field is known under the name Dall-E 2 (Ramesh et al., 2022a). For the first time, the wider public took an interest in its potential applications. This might be due to the great PR from its authors, OpenAI. Dall-E 2, also known as just Dall-E, or unCLIP, has been advertised as a successor of Dall-E 1, on whose results it significantly improved. In reality, the architecture and the results it achieved are much more similar to those of GLIDE. Additionally, social media has been flooded with images generated by the model.
This was possible thanks to OpenAI giving access to everybody who was interested and patient enough to get through a waiting list. However, the model itself again remains unpublished. Another factor that might have contributed to Dall-E 2's success were its inpainting and outpainting capabilities, although it is worth mentioning that these were already possible with GLIDE. In essence, unCLIP is a very smart combination of prior work from OpenAI that was re-engineered and applied in a novel way. Nevertheless, the model represents a significant leap forward, which is why it cannot be omitted in this chapter.

Dall-E 2 system

UnCLIP consists of two components: a prior and a decoder. Let $x$ be the image and $y$ its caption. $z_i$ and $z_t$ are the CLIP image and text embeddings of this $(x, y)$ pair. The prior $P(z_i|y)$ is responsible for producing CLIP image embeddings conditioned on the text caption. The decoder $P(x|z_i, y)$ outputs an image conditioned on the CLIP image embedding and, again, the text caption itself.

For the prior, the authors try two different approaches, namely autoregressive and diffusion models, with the latter yielding slightly better results. The diffusion prior is a Transformer taking as input a special sequence consisting of an encoded text prompt, the CLIP text embedding, an embedding for the diffusion step, and a noised CLIP image embedding.

The decoder again consists of diffusion models. First, a GLIDE-like model takes a CLIP image embedding as its $x_t$ instead of the pure noise that was used in the original version. Similarly to the original GLIDE, classifier-free guidance is applied, however with slight differences. Lastly, two diffusion upsampler models are trained to bring images first from 64x64 to 256x256, and then from 256x256 to 1024x1024 resolution.
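Structurally, the generation pipeline just described can be sketched as follows; every callable below is a stub with hypothetical names and output shapes, since the real unCLIP models are not public:

```python
import numpy as np

# Stubs standing in for the actual networks (names and shapes are assumptions)
clip_text_embed = lambda y: np.zeros(512)               # z_t: CLIP text embedding
prior           = lambda z_t, y: np.zeros(512)          # P(z_i | y): diffusion prior
decoder         = lambda z_i, y: np.zeros((64, 64, 3))  # P(x | z_i, y): GLIDE-like decoder
upsample_256    = lambda x: np.zeros((256, 256, 3))     # diffusion upsampler to 256x256
upsample_1024   = lambda x: np.zeros((1024, 1024, 3))   # diffusion upsampler to 1024x1024

def unclip_generate(caption):
    z_t = clip_text_embed(caption)
    z_i = prior(z_t, caption)       # CLIP image embedding produced from the caption
    x = decoder(z_i, caption)       # 64x64 image
    x = upsample_256(x)
    return upsample_1024(x)

img = unclip_generate("a red cube on top of a blue cube")   # shape (1024, 1024, 3)
```
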
The authors found no benefit in conditioning these models on text captions. Finally, unCLIP can be summarized as a mixture of GLIDE and CLIP with a lot of engineering behind it.

Results

When compared to GLIDE, unCLIP shows it is capable of representing a wider diversity of the data, while achieving a similar level of photorealism and caption similarity. A comparison to previous works on the MS-COCO dataset shows that unCLIP achieves an unprecedented FID (Figure 3.30). A few output examples computed on MS-COCO captions can be found in Figure 3.31.

FIGURE 3.30: Comparison of FID on MS-COCO. The best results for unCLIP were reported with a guidance scale of 1.25. Figure from Ramesh et al. (2022a).

Limitations

UnCLIP suffers from very similar problems as its predecessor GLIDE. First, compositionality in the images tends to be confused by the model; failure cases can be seen in Figure 3.32. Second, unCLIP struggles with generating coherent text inside an image (Figure 3.33). The authors hypothesize that using CLIP embeddings, although improving diversity, might be responsible for making these problems more evident than in GLIDE. Lastly, unCLIP often fails to deliver details in highly complex scenes (Figure 3.34). Again, according to the authors, this might be a result of the fact that the decoder produces only 64x64 images which are later upsampled.

3.2.6 Imagen & Parti

Only a few months after unCLIP was released by OpenAI, Google came into play for the first time with its new model Imagen (Saharia et al., 2022b). Another one followed just two months later: Parti (Yu et al., 2022b). Both of these models pushed the boundaries even further, although they take entirely different approaches.
None of them introduces a completely new way of looking at the problem of text-to-image generation; their advancements come from engineering and further scaling of existing solutions. However, it must be stressed that currently (September 2022) they deliver the most outstanding results.

The FID values compared in Figure 3.30 are the following:

Model                               FID     Zero-shot FID   Zero-shot FID (filt)
AttnGAN (Xu et al., 2017)           35.49
DM-GAN (Zhu et al., 2019)           32.64
DF-GAN (Tao et al., 2020)           21.42
DM-GAN + CL (Ye et al., 2021)       20.79
XMC-GAN (Zhang et al., 2021)         9.33
LAFITE (Zhou et al., 2021)           8.12
Make-A-Scene (Gafni et al., 2022)    7.55
DALL-E (Ramesh et al., 2021)                ~28
LAFITE (Zhou et al., 2021)                  26.94
GLIDE (Nichol et al., 2021)                 12.24           12.89
Make-A-Scene (Gafni et al., 2022)           11.84
unCLIP (AR prior)                           10.63           11.08
unCLIP (Diffusion prior)                    10.39           10.87

FIGURE 3.31: Image samples on MS-COCO text prompts. Figure from Ramesh et al. (2022a).

FIGURE 3.32: ‘a red cube on top of a blue cube’. Figure from Ramesh et al. (2022a).

FIGURE 3.33: ‘A sign that says deep learning.’ Figure from Ramesh et al. (2022a).

Imagen is a diffusion model. Its main contribution is that instead of using a text encoder trained on image captions, it uses a huge pretrained NLP model called T5-XXL (Raffel et al., 2019b) that is taken off the shelf and frozen. The authors argue that this helps the model understand language much more deeply, as it has seen more diverse and complex texts than just image captions.

Parti, on the other hand, takes an autoregressive approach. Similarly to the first version of Dall-E, it consists of two stages, namely an image tokenizer and a sequence-to-sequence autoregressive part which is responsible for generating image tokens from a set of text tokens. In this case, ViT-VQGAN (Yu
et al., 2021) is used as a tokenizer and the autoregressive component is again Transformer-like.

FIGURE 3.34: ‘A high quality photo of Times Square.’ Figure from Ramesh et al. (2022a).

Results

Both of the models improved the FID significantly compared to previous works. Figure 3.35 shows the comparison.

FIGURE 3.35: Comparison of FID on MS-COCO. Figure from Yu et al. (2022b).

Approach        Model Type        MS-COCO FID (zero-shot / finetuned)
DALL-E          Autoregressive    ~28 / -
CogView         Autoregressive    27.1 / -
CogView2        Autoregressive    24.0 / 17.7
GLIDE           Diffusion         12.24 / -
Make-A-Scene    Autoregressive    11.84 / 7.55
DALL-E 2        Diffusion         10.39 / -
Imagen          Diffusion         7.27 / -
Parti           Autoregressive    7.23 / 3.22

Samples from Parti can be seen in Figure 3.36. They are included here on purpose - this is the current state of the art as of the moment of writing!

FIGURE 3.36: Selected samples from Parti. Figure from Yu et al. (2022b).

Limitations

Yu et al. (2022b) mention an extensive list of problems with which Parti still struggles. At this point, all of them can be treated as a set that is common to almost all available models. Among others, they include:

• feature blending (where features of two different objects are mixed)
• omission or duplication of details
• displaced positioning of objects
• counting
• negation in text prompts

and many more. These flaws pose a challenge for future research and undoubtedly they are the ones that need to be addressed first to enable another leap forward in the field of text-to-image generation.
3.2.7 Discussion

Lastly, it is important to mention a couple of different topics, or trends, which are intrinsically linked with text-to-image generation. Together with the previous sections, they should give the reader a holistic view of where research currently stands (again, as of September 2022).

Open- vs closed-source

The first trend that has emerged only recently is that AI labs do not open-source their state-of-the-art models and training data. This is in clear opposition to how the entire AI community has behaved from the very beginning of the recent Deep Learning era. Apparently, the possible commercial opportunities that come along with owning the software are too big to be ignored. The trend is very disruptive - it is clear that the community is currently witnessing the maturation of AI business models. Needless to say, it is followed by all the greatest AI labs, to name just a few: OpenAI, DeepMind, Google Brain, Meta AI, and many others. As long as commercial achievements have an edge over academic research, it is highly doubtful that the trend will be reversed. However, it needs to be stressed that all of them still issue more or less detailed technical specifications of their work in the form of scientific papers, which is definitely a positive factor. We, as a community, can only hope this will not change in the future.

Open-Source Community

While the trend of closed-sourceness is clearly visible across many Deep Learning areas, text-to-image research is actually well represented by an open-source community. The most important milestones of recent years indeed come from OpenAI; however, new approaches can be seen across a wide community of researchers. Many of these models are public, meaning that any user with minimal coding experience can play with them.
Although we decided not to go into the details of particular works, it is important to name a few that became the most popular:

• VQGAN-CLIP (Crowson et al., 2022)
• Midjourney (Midjourney, 2022)
• Latent Diffusion (Rombach et al., 2021)
• Stable Diffusion (Rombach et al., 2022)

Potential applications

Image generation that can be done in a controllable manner undoubtedly has huge potential for commercialization. Although the field is currently still very immature, hypotheses about which industries might be disrupted are emerging. Essentially, every branch that has to do with generating visual art, be it static images or videos, should observe the trend closely. Graphic design, movie making, stock photos - just to name a few that might be affected. Currently, experimental use cases in the areas of texture synthesis, product design, or building virtual-reality worlds can already be observed. AI, even if still incapable of generating the final product, can help automate a significant part of the production chain, which essentially means time and money savings. The inpainting and outpainting capabilities of recent models play a significant role in this trend. Although it is still very hard to judge which direction this will take in the future, it will definitely be a very interesting and disruptive change. Who wouldn't like to see movies soon being generated directly from a book's text, pixel value by pixel value?

Ethics / Conclusion

Automated image generation poses an array of serious questions of an ethical character. Fortunately, many of them are already very well recognized by the community. For example, OpenAI elaborates extensively on the risks and limitations of its Dall-E 2 in a blog post by Mishkin et al. (2022). A few of the most important topics are presented here.

The first and very significant risk is the potential misuse of the models.
Fake image generation can easily be used for harassment and disinformation. Especially combined with inpainting, which is capable of erasing or adding objects to real scenes, it poses a non-trivial challenge for researchers on how to responsibly share their work.

Another important area touches on the biases and stereotypes which are intrinsically built into the technology. Obviously, a model combines concepts from the data it has seen. However, if this area is to be commercialized, broader diversity needs to be ensured. An interesting example of Dall-E 2 samples can be seen in Figure 3.37.

FIGURE 3.37: Biased samples from Dall-E 2. Figure from Mishkin et al. (2022).

In order to fully enable AI generation, the problem of copyright needs to be solved first. It is definitely not clear who the author of generated images is. Is it the person who came up with the text prompt and ran the model? Is it a model engineer? The author of the model's architecture? The owner of the data it has been trained on? Or maybe the model itself? Another question is what really constitutes a creative contribution and should eventually result in copyright being granted. These and many other questions definitely require extensive debate and, hopefully, legal solutions following it.

3.3 Images supporting Language Models

Author: Giacomo Loss
Supervisor: Matthias Aßenmacher

3.3.1 Words In (Non-Symbolic) Contexts

Imagine you were alone in a foreign country, you could not speak the language and the only resource you had was a dictionary in that language. You see a word written on a sign but you cannot understand its meaning. What could you do? One idea would be to open the dictionary and look the word up. The problem is that the word is defined using other words in the foreign language. As a second step you would thus look these new words up and continue like that in further steps to "infinity and beyond" (cit.
Buzz Lightyear). But even after looking up every single word in the dictionary, you would still not be able to understand the meaning of the word written on the sign. If, next to the unknown word, something else was instead depicted on that sign, for example an image of a fork and a knife, you might speculate that the word indicates something which has to do with food, like a restaurant - and this without explicitly knowing the meaning of the word. This example is inspired by the work of Stevan Harnad, who formulated at the beginning of the 90's the so-called Symbol Grounding Problem (Harnad (1990)). It asserts that it is not possible to understand the meaning (semantics) of a word by just looking at other words, because words are essentially meaningless symbols. It is possible to understand the meaning only if the word is put in a context, a perceptual space other than that of written language: the word must be grounded in non-symbolic representations, like images, for example. Over the past 10 years there has been a whopping development of distributional semantic models (DSMs, henceforth), especially after the Word2vec (Mikolov et al. (2013b)) revolution. This family of models assumes that the meaning of words and sentences can be inferred from the "distribution" of those words and sentences within a text corpus (the Distributional Hypothesis formulated by Harris et al. (1954)). But the Symbol Grounding Problem mentioned earlier suggests that DSMs do not resemble the way words are learned by humans, which is in multimodal perceptual contexts.
For these reasons, models have been developed with the goal of integrating further modalities (like visual ones) into pure language models, assuming that grounding words and sentences in other perceptual contexts should lead to a better understanding of their semantics and, as a result, to better performance on pure language tasks.

The focus of this subchapter is on models which empower pure language models with visual modalities in the form of images: their goal is to obtain better semantic representations (in the form of embedding vectors) of words. First, a quick recap of the main pure language models will be provided. After that, the historical evolution of the integration of images as visual modalities into pure language models will be discussed: from the simple concatenation of textual and visual modalities, to the projection of visual elements into a common grounded space and, more recently, the use of Transformers (see Figure 3.38). Eventually, a comprehensive evaluation of the different models against benchmarks will be carried out.

Again, the focus is on how to employ visual elements to obtain embeddings able to capture the semantics of words. More concrete applications, such as those in the field of machine translation, are out of scope and will only be marginally addressed at the end of the subchapter.

FIGURE 3.38: Historical evolution of models which integrate visual information into pure language models: from sequential embeddings (ca. 2014, e.g. Bruni et al., Hill et al., Kiela et al.) over grounded embeddings (ca. 2016, e.g. Collell et al., MM Skipgram, Bordes et al.) to Transformer-based models (from 2019 on, e.g. VisualBERT, ViLBERT, LXMERT, UNITER, Vokenization, FLAVA, UniT, UFO, XDBERT, Flamingo).

3.3.2 Word-Embeddings: Survival-Kit

In other parts of this book, the most important NLP models and the latest developments in the field are described extensively.
In this section, some information will be provided which might be helpful to understand some of the aspects discussed in this subchapter. As may have been inferred from the introduction, the starting point is always a pure language model, namely a model which employs only textual inputs in order to generate word embeddings, which are representations of words in the form of numerical vectors. The three pure language models most widely used in the papers presented in this subchapter are the following:

• Skipgram (Word2vec, Mikolov et al. (2013b)): given a target word, the probability of the neighboring (surrounding) words in a pre-defined window has to be maximized. Training takes place either through a hierarchical softmax or through negative sampling, which involves maximizing the probability of words which are real neighbors and minimizing that of words which are not ("negative samples").
• GloVe (Pennington et al. (2014)): based on word co-occurrences across the entire corpus, with the goal of minimizing the difference between the dot product of the embedding vectors of two words and the logarithm of their number of co-occurrences.
• BERT (Devlin et al. (2018c)): two pre-training tasks are used to obtain word embeddings:
  – Masked Language Modelling (MLM): given a sentence with [MASK]ed tokens, the goal is to predict these masked tokens.
  – Next Sentence Prediction (NSP): given two sentences A and B, the goal is to predict whether B follows from A.

Two additional remarks conclude this section. First, Skipgram and GloVe generate embeddings which are "context-free": they do not take into account the context in which words occur. On the contrary, BERT is designed to represent words given the context (sentence) in which they occur: we can thus have different embeddings for the same word, depending on the context.
Second, the inputs of these models are tokens: with the help of a tokenizer, which can differ between models, the text is split into "chunks", called tokens (and they are not necessarily single words).

3.3.3 The Beginning: Sequential Multimodal Embeddings
Supposing we have linguistic and visual feature representations related to a particular word, how could we fuse them? One intuitive idea would be to concatenate the textual and visual modalities. Let V_text be the textual (vectorial) representation of a word and let V_img be its visual (vectorial) representation; a fused representation F of a certain word w might take the following simplified form:

F = γ V_text ⊕ (1 − γ) V_img

where ⊕ denotes concatenation and γ is a tuning parameter which controls the relative contribution of both modalities to the final fused representation. Bruni et al. (2014) propose a model where the meaning of a target word is represented in the form of a semantic vector and all vectors are collected in a text-based semantic matrix; textual embeddings are computed based on (transformed) co-occurrence counts of words in a pre-defined window. The starting point to obtain an image-based representation of a certain target word is a dataset of labeled images. For each image associated to the target word (which means that the target word is to be found in the image's caption), low-level features called "local descriptors" - which incorporate geometric information of specific areas of a certain picture - are extracted, and these descriptors are then assigned to clusters (bags) of "visual words"¹. Afterwards, for each target word, visual word occurrences are summed up to obtain the occurrence counts related to the target word. These image-based semantic vectors are then transformed and collected in an image-based semantic matrix.
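The simplified fusion form above can be sketched in a few lines of NumPy; the vectors and the value of γ are made-up toy values, purely for illustration:

```python
import numpy as np

def fuse(v_text, v_img, gamma=0.5):
    """Weighted concatenation of textual and visual representations:
    F = gamma * V_text (+) (1 - gamma) * V_img, with (+) = concatenation."""
    return np.concatenate([gamma * v_text, (1.0 - gamma) * v_img])

v_text = np.array([0.2, -0.1, 0.7])  # toy textual embedding
v_img = np.array([0.5, 0.5])         # toy visual embedding

F = fuse(v_text, v_img, gamma=0.4)
print(F.shape)  # (5,): the fused vector keeps both modalities side by side
```

Note that the fused dimensionality is the sum of the two modalities' dimensionalities, which is what makes this "sequential" combination so simple.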
The two matrices are then concatenated and projected into a common latent multimodal space with a singular value decomposition. Through this process a textual mixed matrix and a visual mixed matrix are extracted and then combined according to different fusion strategies to build the multimodal embeddings. In this first, relatively cumbersome (historically motivated) example, the vector representation of an image is obtained with non-trivial feature engineering.
In recent years, the use of neural networks has made an "automatic feature selection" possible. This is what, for example, Kiela and Bottou (2014) propose, extracting visual features from the first seven layers of a convolutional neural network (proposed by Krizhevsky et al. (2012b)) trained on 1.6 million images from the ImageNet database (Deng et al. (2009)), which produces scores for 1,512 object categories. The linguistic part of the model relies on the Skipgram model by Mikolov et al. (2013b) and consists of 100-dimensional vector representations. The multimodal representation is again obtained by concatenation of both modalities.
Another notable example of concatenation/sequential combination of textual and visual modalities is the work of Silberer and Lapata (2014): textual and visual modalities are represented by separate vectors of textual and visual attributes. During training, these textual and visual input vectors are separately fed to denoising (unimodal) autoencoders, the training objective of which is the reconstruction of a certain corrupted input - e.g. through masking noise - from a latent representation. Their outputs are then jointly fed to a bimodal autoencoder to be mapped to a multimodal space, on which
¹See for example Bosch et al. (2007) for more details on this technique, called "bag-of-visual-words".
FIGURE 3.39: From Kiela and Bottou (2014). Textual and visual feature vectors are concatenated.
a softmax layer (classification layer) is added, which allows the architecture to be fine-tuned for different tasks.

3.3.4 The Grounded Space
The aforementioned models implicitly assume a one-to-one correspondence between text and images: a visual representation is extracted only from words which are associated to a concrete image. This is a limitation, for two partially overlapping reasons. On the one hand, how can we depict words for which no image is available in our training set? Is it possible to imagine visual representations purely from linguistic ones? On the other hand, could we hypothetically find a visual representation for each word? This might be true for concrete words, but when it comes to abstract ones it is not always possible to find suitable visual representations or, said in other terms, many words are not visually grounded. For these reasons, researchers have addressed the question: could we map textual and visual elements to a grounded space and design models able to generalize images and words beyond those in the training set? Well, the answer is yes!
Lazaridou et al. (2015) propose a multimodal Skip-gram architecture where the objective function of a Skip-gram is "augmented" with an additional visual objective:

(1/T) Σ_{t=1}^{T} (L_ling(w_t) + L_vision(w_t))

where L_ling is the Skip-gram loss function and L_vision is the additional visual loss for the target word w_t.
In particular, L_vision has the form of a hinge loss, the goal of which is to make the (vectorial) linguistic representation of a certain word more similar to its visual representation:

L_vision(w_t) = − Σ_{w′∼P_n(w)} max(0, γ − cos(z_{w_t}, v_{w_t}) + cos(z_{w_t}, v_{w′}))

where v_{w′} is the visual representation of a randomly chosen word w′ (drawn from a probability distribution P_n(w)) used as a negative sample, v_{w_t} is the corresponding visual vector and z_{w_t} is the target multimodal word representation which has to be learned by the model. It is nothing more than a linear transformation of a word representation u_{w_t}: z_{w_t} = M^{u→v} u_{w_t}, where M^{u→v} is a cross-modal mapping matrix from linguistic inputs to a visual representation. It is important to remark that during training, for words which do not have associated images, L_vision is set to zero. Once this cross-modal mapping matrix is estimated, it is possible to find a visual representation for new words which do not have a related image in the training set: the model allows us to imagine new words. This is what is meant by grounded space: a perceptual (visual, in this case) space where a word is grounded, put in context.
FIGURE 3.40: From Lazaridou et al. (2015). The linguistic embedding of the word 'cat' is mapped to a visual space, such that the similarity of vector representations of words and associated images is maximized.
Similar instances of a cross-modal mapping can be found, for example, in Kottur et al. (2016) (a multimodal extension of the CBOW model specification of word2vec) and in Collell et al. (2017), where visual features are obtained from the forward pass of a CNN pre-trained on ImageNet (Deng et al.
(2009)), and a mapping function from the textual space to the visual space is obtained as a result of the training process. Also in this case it is possible to generate a visual representation from the embedding of a certain word, not necessarily present in the training set. In particular, they propose two specifications of the mapping function: a simple linear mapping and a neural network with a single hidden layer. Last but not least, Hill and Korhonen (2014) recognize that concrete nouns are more likely to have a visual representation. For this reason, they map a set of concrete words (CSLB, Devereux et al. (2014)) to "bags of perceptual/visual features" and, every time one of these words is encountered during training, the Skip-gram model they are using stops training on that sentence and instead continues training on a newly created "pseudo-sentence", which takes into consideration the aforementioned bag of perceptual features. This list is unfortunately not exhaustive and there are other models with similar ideas, for example Ailem et al. (2018) or Kiros et al. (2018).
The aforementioned papers and related models focus on modeling the semantics of words. Nonetheless, there are models designed to address tasks at sentence level, such as sentiment analysis or sentence entailment. Kiela et al. (2017) employ a bidirectional Long Short-Term Memory (LSTM, Hochreiter and Schmidhuber (1997)) architecture to model sentence representations, in order to gain information from the text in both directions. The goal is again to encode a sentence and ground it in an image. Textual embeddings are obtained with GloVe (Pennington et al. (2014)) and are then projected onto a grounded space with a linear mapping.
This grounded word vector serves as input for the bidirectional LSTM, which is trained together with the linear mapping. Their model is versatile: depending on the loss function specification, it can not only propose alternative captions to an image (which is a way to frame sentence equivalence tasks) but also predict captions from images, or perform both tasks at the same time. This last point highlights an important characteristic of many of the models discussed in this subchapter: even though the focus is on the empowerment of pure language models with the addition of visual elements, some of the models discussed here can be used for purposes other than pure language tasks. The control over which task is performed is usually exercised either by specifying different loss functions (as in the last model described) or by properly setting certain hyperparameters (as in the previously described model by Silberer and Lapata (2014)).

3.3.5 The Transformers Era
A turning point for the field of NLP was Vaswani et al. (2017b)'s paper "Attention is all you need", where the authors proposed, for two machine translation tasks, a novel architecture, the Transformer (not to be confused with the giant robots from Michael Bay's blockbuster movies!), which leverages only the attention mechanism. Even though an exhaustive description of the Transformer architecture is beyond the scope of this subchapter, it is worth mentioning why Transformers became so popular over the past four years in the field of NLP (among others), in comparison to Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs).
Well, the three main properties of Transformers are the following:
• Self-Attention
• Parallel input processing
• Positional embeddings²
When feeding, for example, a textual sentence to an RNN, the network deals with one word after the other in a sequential fashion, and one of the known issues is that information contained in earlier parts of the sequence tends to "fade away" as the sentence is analyzed further: newer inputs carry a larger influence on the outputs at a given step. LSTMs try to mitigate this problem by introducing a component called a "gate", which regulates the information flow, namely which information from past inputs needs to be "remembered" by the model. The goal is to capture long-term dependencies among different parts of the sentence fed into the model.
On the contrary, thanks to the Self-Attention mechanism, at each step Transformers can access previous steps, thus limiting the loss of information to a minimum. Moreover, inputs are processed not sequentially but all at the same time, thus allowing the model to capture dependencies by looking at the sentence as a whole, and this can make a fundamental difference in many downstream applications: for example, in German dependent clauses ("Nebensaetze") the verb comes at the end of the phrase, but it determines the case of the nouns that come before it. A Transformer can thus potentially capture the dependencies between the verb at the end of the sentence and the words at the beginning. Lastly, Transformers encode for every input information on its position within a sentence, since it is often the case that the importance and meaning of a certain word vary depending on its position within a sentence. These were the Transformers, in a nutshell.
But Transformers did not only bring a change of paradigm in terms of architectures.
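The self-attention mechanism behind these properties can be sketched in a few lines of NumPy, assuming toy dimensions and random weights rather than any particular trained model:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over the whole sequence at once:
    every position attends to every other position in parallel."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])           # pairwise relevance
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over positions
    return weights @ V                               # context-mixed outputs

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 6, 16, 8
X = rng.normal(size=(seq_len, d_model))              # toy token embeddings
out = self_attention(X,
                     rng.normal(size=(d_model, d_head)),
                     rng.normal(size=(d_model, d_head)),
                     rng.normal(size=(d_model, d_head)))
print(out.shape)  # (6, 8)
```

Because the attention weights are computed for all position pairs at once, the "distance" between the first and last word of a sentence plays no special role, unlike in an RNN.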
First, while for the models of the pre-Transformers era described before the focus was on the ability of word embeddings to capture similarity among words, the focus has now shifted more towards downstream tasks (more on this later in the evaluation section), encompassing not only purely linguistic ones but also tasks with visual components, such as, for example, visual question answering. It is now more difficult (but not impossible) to draw a line between models where "images support pure language models" (the object of this subchapter) and models which could actually be categorized as "vision and language" models but can also be employed to solve pure linguistic tasks. This issue brings up another peculiarity of many Transformer-based models, namely their "universal vocation": without loss of generality, we could say that the idea is now to design powerful (mostly self-supervised) multimodal pre-training tasks capable of generating task-agnostic representations, whose encoded knowledge can be efficaciously transferred to diverse downstream tasks, limiting the amount of labeled data necessary to fine-tune the models (so-called few-shot learning).
²It may be argued that this point is a necessity to be able to work on sequences rather than a strength.
Let's briefly discuss two examples, FLAVA (Singh et al. (2022)) and UniT (Hu and Singh (2021a)). FLAVA has two separate encoders for images and text and a multimodal encoder, all based on the Vision Transformer (Dosovitskiy et al. (2020a)). Unimodal pre-training consists of masked image modeling (where a set of image patches is to be reconstructed from the other, unmasked image patches) and masked language modeling.
Multimodal pre-training tasks consist instead of a global contrastive loss (maximization of cosine similarities between paired images and texts), masked multimodal modeling (where image patches and text tokens are masked) and an image-text matching task. The model is pre-trained jointly on unimodal and multimodal datasets and then evaluated (fine-tuned) on 22 vision tasks, 8 pure linguistic tasks and 5 vision-and-language tasks.
UniT has an image encoder and a text encoder, a multimodal domain-agnostic decoder and task-specific heads. There is no pre-training on multimodal data and the model is trained end-to-end on 7 tasks (vision, language and vision-and-language) and 8 datasets, with the idea that solving different tasks across domains jointly should prevent general knowledge from being lost through fine-tuning on particular downstream tasks.
These two examples clearly show what is meant by the "universal vocation" of many modern Transformer-based models. But there are still models specifically designed to solve pure language tasks, and in the following pages two of them will be described.

3.3.5.1 Vokenization
It is often difficult for a child to describe the meaning of a certain word. A child might not be able to describe what a lion is, but if given pictures of different animals he might very well be able to point at the picture of a lion. Visual pointing could thus act as a form of supervision for natural language. Is it possible to build into a pure language model a form of visual supervision which mimics the visual pointing often adopted by children? This is exactly the problem that Tan and Bansal (2020) try to address: how to associate to each textual representation (token) a visual representation (voken).
Let's suppose we had a dataset of word(token)-image pairs.
We could integrate into the pre-training framework of pure language models the following voken-classification task:

L_VOKEN-CLS(s) = − Σ_{i=1}^{l} log p_i(v(w_i; s) | s)

h_1, h_2, ..., h_l = languagemodel(w_1, w_2, ..., w_l)

p_i(v | s) = softmax_v{W h_i + b}

where {h_i} are the feature representations of the tokens in a sentence s = {w_i}, extracted from a language model (such as BERT), and the vokens originate from a finite set of images X. Each h_i is transformed into a probability distribution through a softmax layer, with the voken-classification loss defined as the negative log-likelihood of all related vokens.
The model architecture would then be:
FIGURE 3.41: From Tan and Bansal (2020). The language model is visually supervised with token-related images, called vokens.
Everything sounds fantastic! There is only one small pitfall: a set X of images for all tokens does not exist! Could we find a proxy for such a set? One might consider image-captioning datasets such as MS COCO (Lin et al. (2014b)). But this suboptimal solution is also problematic.
The Grounding Ratio is defined as the proportion of tokens in a dataset which are related to a specific visual representation (i.e. the tokens are visually grounded), such as "dog", "table" and the like. In figure 3.42 it is striking that only around one third of the tokens contained in pure language corpora such as Wiki103, English Wikipedia and CNN/DM are visually grounded in image-captioning datasets³. It is thus not possible to rely (only) on image-captioning datasets to build the voken-classification task. But the fact that a word/token does not have a visual representation in one of these datasets does not mean that it is impossible to visually represent it. Would it be possible to associate images to words/tokens which are not directly visually grounded? Well, the answer is yes!
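A minimal sketch of such a voken-classification head, with hypothetical toy sizes (the real model uses BERT hidden states and a much larger voken vocabulary):

```python
import numpy as np

rng = np.random.default_rng(0)
SEQ_LEN, HIDDEN, N_VOKENS = 5, 32, 1000   # toy sizes, not the paper's

H = rng.normal(size=(SEQ_LEN, HIDDEN))    # token features h_i from a language model
W = rng.normal(scale=0.02, size=(HIDDEN, N_VOKENS))
b = np.zeros(N_VOKENS)
vokens = rng.integers(0, N_VOKENS, size=SEQ_LEN)  # v(w_i; s) for each token

# Softmax over the voken vocabulary, in log space for numerical stability
logits = H @ W + b
logits -= logits.max(axis=1, keepdims=True)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

# Negative log-likelihood of the voken assigned to each token
loss = -log_probs[np.arange(SEQ_LEN), vokens].sum()
print(loss > 0)  # True
```

In pre-training, this loss would simply be added to the language model's usual masked-language-modeling loss.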
³From an operative point of view, the authors consider a token type "visually grounded" if it has more than 100 occurrences in MS COCO.

FIGURE 3.42: From Tan and Bansal (2020). Statistics of image-captioning datasets and other natural language corpora. VG, CC, Eng Wiki, and CNN/DM denote Visual Genome, Conceptual Captions, English Wikipedia, and CNN/Daily Mail, respectively. JSD represents the Jensen-Shannon divergence to the English Wikipedia corpus.

Dataset  | # of Tokens | # of Sents | Vocab. Size | Tokens # / Sent. | 1-Gram JSD | 2-Gram JSD | Grounding Ratio
MS COCO  | 7.0M        | 0.6M       | 9K          | 11.8             | 0.15       | 0.27       | 54.8%
VG       | 29.2M       | 5.3M       | 13K         | 5.5              | 0.16       | 0.28       | 57.6%
CC       | 29.9M       | 2.8M       | 17K         | 10.7             | 0.09       | 0.20       | 41.7%
Wiki103  | 111M        | 4.2M       | 29K         | 26.5             | 0.01       | 0.05       | 26.6%
Eng Wiki | 2889M       | 120M       | 29K         | 24.1             | 0.00       | 0.00       | 27.7%
CNN/DM   | 294M        | 10.9M      | 28K         | 26.9             | 0.04       | 0.10       | 28.3%

FIGURE 3.43: From Tan and Bansal (2020). The vokenization process. A contextualized image (visual token, voken) is retrieved for every token in a sentence and, with this visual token, visual supervision is performed.
Vokenization is a process that assigns to every token w_i contained in a sentence s a visual representation (called a voken) originating not from a generative model but rather from a finite set of images X = {x_1, ..., x_n}. The voken v(w_i; s) is the image from X which maximizes the following Relevance Score Function:

v(w_i; s) = argmax_{x∈X} r_θ*(w_i, x, s)

This function takes into account not only the token w_i itself but also its context (the sentence), and it is parametrized by θ, with θ* being the optimal value (which has to be estimated).

3.3.5.1.1 The Relevance Score Function: Model, Training, Inference
The Relevance Score Function is defined as the inner product of the language feature representation f_θ(w_i, s) and the visual feature representation g_θ(x):

r_θ(w_i, x, s) = f_θ(w_i, s)^T g_θ(x)

Supposing h_1, ..., h_l and e are the embeddings originating from pre-trained language and visual encoders respectively (in the paper the authors use BERT and ResNeXt), the language and visual representations are obtained first by applying multi-layer perceptrons w_mlp_θ and x_mlp_θ, which downproject the embeddings from the pre-trained models to a common vector space, and second by normalizing them (with the L2 norm):

f_θ(w_i; s) = w_mlp_θ(h_i) / ||w_mlp_θ(h_i)||

g_θ(x) = x_mlp_θ(e) / ||x_mlp_θ(e)||

With respect to training, to estimate the optimal value of the parameter θ, image-captioning datasets, which are collections of sentence-image pairs, are employed. Operationally, for every sentence s_k associated to image x_k in the image-captioning dataset, each token w_i in s_k is associated to x_k and the hinge loss is used to estimate the optimal value θ*:

L_θ(s, x, x′) = Σ_{i=1}^{l} max(0, M − r_θ(w_i, x, s) + r_θ(w_i, x′, s))

The goal is to maximize the Relevance Score Function for aligned token-image pairs (w_i, x; s) and to minimize the score for unaligned pairs (w_i, x′; s) by at least a margin M, with x′ being a randomly sampled image from the image-captioning dataset not associated to sentence s.
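The relevance score and its hinge loss can be sketched as follows; the linear projections stand in for the paper's MLPs, and all dimensions and inputs are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D_TXT, D_IMG, D_COMMON = 32, 48, 16   # toy dimensions, not the paper's

# Hypothetical linear "MLPs" projecting both modalities to a common space
W_txt = rng.normal(scale=0.1, size=(D_TXT, D_COMMON))
W_img = rng.normal(scale=0.1, size=(D_IMG, D_COMMON))

def normalize(v):
    return v / np.linalg.norm(v)

def relevance(h_token, e_image):
    """r_theta(w, x, s): inner product of L2-normalized projections."""
    return normalize(h_token @ W_txt) @ normalize(e_image @ W_img)

h = rng.normal(size=D_TXT)       # token feature from the language encoder
e_pos = rng.normal(size=D_IMG)   # image paired with the token's sentence
e_neg = rng.normal(size=D_IMG)   # randomly sampled unaligned image

M = 0.5                          # margin
hinge = max(0.0, M - relevance(h, e_pos) + relevance(h, e_neg))
print(hinge >= 0.0)  # True; zero once the aligned pair wins by at least M
```

Because both projections are unit-normalized, the relevance score is a cosine similarity, which is what makes the retrieval step discussed next a nearest neighbor search.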
Once we have the language feature representation f_θ(w_i, s) for each token in our language corpus and the optimal estimate of θ, how is it possible to find the image x, encoded by the visual feature representation g_θ(x), which maximizes the Relevance Score Function? As said earlier, the function is expressed as the inner product of the textual and visual representations and, since the feature vectors have Euclidean norm equal to 1, the inner product maximization problem is equivalent to a nearest neighbor search problem. It is sufficient to find the vector g_θ(x) which is the nearest neighbor of f_θ(w_i, s)⁴.
With this process it is thus possible to assign a visual representation, a voken, to any word/token in a language corpus, pooling from a finite set of images. The problem of the low Grounding Ratio outlined above is solved and the voken-classification task can be integrated into the pre-training framework of any pure language model. Moreover, the authors propose a method called Revokenization, which allows vokens generated using a particular tokenizer to be transferred to frameworks which employ other tokenizers.

⁴The proof is straightforward. Let X ∈ R^l have Euclidean norm equal to 1, i.e. ||X||_2 = 1. In the nearest neighbor search we need to find the vector Y ∈ R^l, also with norm equal to 1, which has minimal Euclidean distance to X. This is the quantity to be minimized:

d(X, Y)² = Σ_{i=1}^{l} (x_i − y_i)² = Σ_{i=1}^{l} x_i² + Σ_{i=1}^{l} y_i² − 2 Σ_{i=1}^{l} x_i y_i = ||X||_2² + ||Y||_2² − 2 X^T Y = 1 + 1 − 2 X^T Y = 2(1 − X^T Y)

Through these simple algebraic manipulations, it is possible to see that minimizing the Euclidean distance between X and Y is equivalent to maximizing X^T Y, which is the inner product. This proves the equivalence between inner product maximization and nearest neighbor search.

3.3.5.2 One Step Further: The Power Of Imagination
Wikipedia defines imagination as "the production or simulation of novel objects, sensations, and ideas in the mind without any immediate input of the senses". Indeed, humans do not only associate words with real images, but also leverage the ability to imagine words/concepts: imagination can help the human brain solve problems with limited supervision or few sample points by empowering its generalization capabilities. Until now we discussed language models supported by visual information in the form of real images (e.g. those retrieved from image-captioning datasets). But with the recent advancements in the field of generative models for images, it is certainly worth investigating whether these generative models can help pure language models produce better representations of words. In particular, the framework proposed by Lu et al. (2022), iACE (Imagination-Augmented Cross-Modal Encoder), will now be discussed: the idea is simply to use a generative model to obtain a visual representation of a textual input and then use these imagined representations as "imagination supervision" for pure language models.
This framework has two main components:
• the imagination generator G: given an input text x, VQGAN (Esser et al. (2021)) is used to render an "imagination" i of x and CLIP (Radford et al. (2021a)) is used to see how well the generated image i is aligned with the input text x. This generative framework is known as VQGAN+CLIP
• the cross-modal encoder E_c: the input text and the rendered imagination are first encoded with a language and a visual encoder respectively, and then CLIP is employed as cross-modal encoder, with inputs being text-imagination pairs

FIGURE 3.44: From Lu et al. (2022). The generator G visualizes imaginations close to the encoded texts by minimizing L_GAN. The cross-modal encoder E_c learns imagination-augmented language representations. The two-step learning procedure consists of: 1) pre-training a Transformer with visual supervision from a large-scale language corpus and image set; 2) fine-tuning the visually supervised pre-trained Transformer and the imagination-augmented cross-modal encoder on downstream tasks.

The learning procedure is composed of two main steps (depicted in figure 3.44). The first step consists in the pre-training of a visually supervised Transformer. In particular, the voken-classification task described before is employed, alongside a masked language modeling task. This is the baseline model, where no information from the "imagination" procedure comes into play yet. The second step is the imagination-augmented fine-tuning on two downstream datasets D (GLUE, Wang et al. (2018), and SWAG, Zellers et al. (2018)).
On one side, the visually supervised Transformer (the baseline) relies only on the textual input during the fine-tuning phase and the following loss function is employed:

L_Lang = − Σ_{j=1}^{|D|} Σ_{k=1}^{K} y_k log p_k(d_j(t) | D)

On the other hand, the iACE is trained to minimize the following cross-entropy loss:

L_Imagine = − Σ_{j=1}^{|D|} Σ_{k=1}^{K} y_k log p_k(d_j(t, v) | D)

with t and v being the textual and imagined feature representations respectively, j indicating the j-th sample belonging to dataset D, K the number of classes and p_k the conditional distribution of d_j.
Training takes place jointly, and both losses, the imagination-augmented one L_Imagine and the pure language loss L_Lang, are linearly combined, with λ being a balance factor:

L = λ L_Imagine + (1 − λ) L_Lang

To sum up, this model-agnostic framework uses generated images for visual supervision and can be integrated on top of pure language models (such as BERT) or visually supervised models (such as the Voken model, which uses vokens, i.e. real images, for visual supervision).

3.3.6 Was It Worth It?
In this subchapter we investigated how visual inputs can support pure language models in capturing the semantics of words. We started with simple concatenation of linguistic and visual features and ended up with Transformer-based models, which are able to shape different word embeddings for the same word by also taking the context (the sentence) into account. But now the question arises: with the addition of visual information, do we obtain word embeddings that are better than those from pure language models? In other words, was everything we have discussed so far worth it? Well, as is often the case in scientific research, the answer is: "it depends!"
Individual evaluation of each single model might not be ideal, because each model has its peculiarities and it is impractical to make a direct comparison among them. It is more useful to capture and discuss the themes which are common to many models, in order to understand their strengths and weaknesses. This is how we will proceed, and we will also differentiate between evaluation before Transformers and evaluation after Transformers.

3.3.6.1 Evaluation In The Pre-Transformers Era
Before the advent of Transformers, the evaluation focus was on the degree of alignment between learned semantic representations (word embeddings) and the representations of human speakers, in the form of correlation between model-based and human-based word-similarity judgments.
Three main types of similarity are usually considered:
• Semantic similarity, e.g. "pasta is similar to rice"
• Semantic relatedness, e.g. "bear is related to mountain"
• Visual similarity, e.g. "cucumbers look like zucchinis"
The evaluation pipeline could be summarized as follows:
FIGURE 3.45: Pipeline for intrinsic evaluation of semantic representations. In the first step, the cosine similarity between two word embeddings w1 and w2 is used as similarity measure and, in a second step, the correlation with human speakers' assessments is computed to gauge the quality of the embeddings. The higher the correlation, the better the embeddings.
Word embeddings are vectors, and to measure the degree of similarity between two vectors, the cosine similarity is often used in the literature. In an ideal setting, we would have word embeddings with the following characteristics: if two words are semantically similar, the two embedding vectors should be similar and their cosine similarity should go towards 1. If the two words are unrelated, the embedding vectors should be orthogonal to each other and, as a consequence, the cosine similarity should go towards zero. Lastly, if two words are negatively related, the two embedding vectors should point in opposite directions and the cosine similarity should go towards -1. Once these similarity measures between word pairs are computed, several benchmarks can be employed to measure the quality of the embeddings, such as MEN (Bruni et al. (2014)), WordSim353 (Agirre et al. (2009)) and SimLex999 (Hill et al. (2015)). These datasets can be described as collections of word pairs with associated similarity ratings by human speakers. Operationally, this means that real people were asked whether a pair of words was related or not, and to which degree, on a scale from -1 (negatively related) to +1 (semantically equivalent).
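This evaluation pipeline can be sketched end-to-end; the 4-dimensional embeddings and the human ratings below are made up purely for illustration:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(a, b):
    """Spearman rank correlation, i.e. Pearson correlation of the ranks."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

# Hypothetical toy embeddings for four words
emb = {
    "pasta": np.array([1.0, 0.9, 0.1, 0.0]),
    "rice":  np.array([0.9, 1.0, 0.0, 0.1]),
    "bear":  np.array([0.0, 0.2, 1.0, 0.8]),
    "love":  np.array([0.1, 0.0, 0.2, 1.0]),
}
pairs = [("pasta", "rice"), ("bear", "love"), ("pasta", "bear")]
human = [0.9, 0.1, 0.2]   # made-up human similarity judgments

model = [cosine(emb[a], emb[b]) for a, b in pairs]
print(spearman(model, human))  # → 0.5
```

Benchmarks like MEN and SimLex999 report exactly this kind of rank correlation, only over hundreds or thousands of word pairs.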
The higher the correlation between the cosine similarity and the similarity judgments by humans, the higher the quality of the word embeddings. Having made this methodological premise, let's discuss the performance of these pre-Transformer models!

Since the goal of these models is to enhance pure language models with the addition of visual inputs, the baseline in the evaluation is always one (or more) pure language model(s). So, do visually grounded embeddings outperform non-grounded ones? What emerges from virtually all papers is that visual grounding can actually help get a better semantic representation of concrete concepts, such as "cat", "table" or "bicycle", whereas it does not help much with the representation of abstract concepts such as "love" and "peace".

3 Multimodal architectures

FIGURE 3.46: From Hill and Korhonen (2014): Each bar represents a different model setting and the dashed line indicates the pure linguistic benchmark model.

In figure 3.46 we can see that pure language models still perform better than models with visual inputs when it comes to the representation of abstract nouns. Another example is Kiela et al. (2017): they found that their models perform better when tested on datasets with a higher degree of concreteness, and the same conclusion is reached by Collell et al. (2017), who state that visual information can empower the representations of concepts that are to a certain extent visual. To sum up, effective semantic representation of abstract concepts constitutes the main limitation common to many of the models discussed in this section.

3.3.6.2 Evaluation In The Post-Transformers Era

A limitation of the intrinsic evaluation metrics is their high degree of subjectivity: the similarity between two concepts depends in many instances on the experience, cultural background and preferences of the human observers. This is why the evaluation focus has now shifted to a more extrinsic dimension: how well do the models perform in downstream tasks? The problem of the "lack of objectivity" is thus solved because on downstream tasks there is no room for opinions. The datasets used to train the models are also different; the most widely used are:

• GLUE (Wang et al. (2018)): 9 tasks, including single-sentence tasks (e.g. sentiment analysis), similarity tasks (e.g. paraphrasing) and inference tasks (e.g. textual entailment)
• SQuAD (Rajpurkar et al. (2016)): question/answer pairs
• SWAG (Zellers et al. (2018)): multiple-choice questions about grounded situations

As previously discussed, many Transformer-based models have a universal vocation: they are built to solve a heterogeneous range of tasks from the language and vision domains. If we thus consider only performance on pure language tasks, the following two tables from Tan and Bansal (2020) are insightful:

FIGURE 3.47: From Tan and Bansal (2020). Statistics of image-captioning datasets and other natural language corpora. VG, CC, Eng Wiki, and CNN/DM denote Visual Genome, Conceptual Captions, English Wikipedia, and CNN/Daily Mail, respectively. JSD represents the Jensen–Shannon divergence to the English Wikipedia corpus.

Model                            Init. with BERT?  Diff. to BERT weight  SST-2  QNLI  QQP   MNLI
ViLBERT (Lu et al., 2019)        Yes               0.0e-3                90.3   89.6  88.4  82.4
VL-BERT (Su et al., 2020)        Yes               6.4e-3                90.1   89.5  88.6  82.9
VisualBERT (Li et al., 2019)     Yes               6.5e-3                90.3   88.9  88.4  82.4
Oscar (Li et al., 2020a)         Yes               41.6e-3               87.3   50.5  86.6  77.3
LXMERT (Tan and Bansal, 2019)    No                42.0e-3               82.4   50.5  79.8  31.8
BERT_BASE (Devlin et al., 2019)  -                 0.0e-3                90.3   89.6  88.4  82.4
BERT_BASE + Weight Noise         -                 6.5e-3                89.9   89.9  88.4  82.3

FIGURE 3.48: From Tan and Bansal (2020). Fine-tuning results of different pre-trained models with or without the voken classification task (denoted as "Voken-cls").

Method                         SST-2  QNLI  QQP   MNLI  SQuAD v1.1  SQuAD v2.0  SWAG  Avg.
BERT 6L/512H                   88.0   85.2  87.1  77.9  71.3/80.2   57.2/60.8   56.2  75.6
BERT 6L/512H + Voken-cls       89.7   85.0  87.3  78.6  71.5/80.2   61.3/64.6   58.2  76.8
BERT 12L/768H                  89.3   87.9  83.2  79.4  77.0/85.3   67.7/71.1   65.7  79.4
BERT 12L/768H + Voken-cls      92.2   88.6  88.6  82.6  78.8/86.7   68.1/71.2   70.6  82.1
RoBERTa 6L/512H                87.8   82.4  85.2  73.1  50.9/61.9   49.6/52.7   55.1  70.2
RoBERTa 6L/512H + Voken-cls    87.8   85.1  85.3  76.5  55.0/66.4   50.9/54.1   60.0  72.6
RoBERTa 12L/768H               89.2   87.5  86.2  79.0  70.2/79.9   59.2/63.1   65.2  77.6
RoBERTa 12L/768H + Voken-cls   90.5   89.2  87.8  81.0  73.0/82.5   65.9/69.3   70.4  80.6

The takeaway is straightforward: unlike in the pre-Transformers era, where grounded word embeddings could improve performance over baselines, Transformer-based universal models do not outperform pure language models such as BERT or RoBERTa. Nonetheless, the addition of visual supervision (the Voken-Classification task) in the pre-training framework can boost performance above the level of pure language models.

Pezzelle et al. (2021) analyzed the intrinsic quality of the embeddings of some vision and language ("universal") models. From this intrinsic evaluation perspective (which was popular in the pre-Transformers era), vision and language models do not generally outperform domain-specific models such as BERT, and also in this case the only real competitor of pure language models is a model with visual supervision (again, Vokenization).

FIGURE 3.49: From Pezzelle et al. (2021). Spearman's rank correlation between similarities computed with representations by all tested models and human similarity judgments in the five evaluation benchmarks.

FIGURE 3.50: From Pezzelle et al. (2021). Correlation between model and human similarity ratings on WordSim353, SimLex999 and MEN. Each barplot reports results on both the whole benchmark and the most concrete subset of it.

The bar plots depict the correlation between human- and model-based similarity ratings, differentiating between the most concrete concepts contained in a certain dataset⁵ and the whole dataset (thus including more abstract concepts). The results confirm the trend: multimodal models are more effective than pure language models at representing concrete words, but in many instances they still lag behind when it comes to more abstract concepts.

Last but not least, a few words need to be spent on a topic which has been steadily gaining relevance: Few-Shot Learning. To train and test models, a large pool of paired images and texts is often needed, and the creation of many of the datasets used in fine-tuning required a huge data collection effort, which had to be performed by human agents. This implies that the creation of such data pools can be very costly. For this reason, there is a growing interest in creating models able to cope with low-resource settings.
This boils down to the question: can a model perform well on downstream tasks even with just a limited number of training examples? The goal is, once again, to mimic how humans learn: a person does not need to see one thousand pictures of a table to be able to recognize a table. . .

⁵See Brysbaert et al. (2014) for information on how the concreteness of a word can be estimated.

The table shown in figure 3.49 (Spearman ρ correlation, with the layer in parentheses):

model           input  RG65         WS353        SL999        MEN          SVERB
BERT-1M-Wiki*   L      0.7242 (1)   0.7048 (1)   0.5134 (3)   -            0.3948 (4)
BERT-Wiki ours  L      0.8107 (1)   0.7262 (1)   0.5213 (0)   0.7176 (2)   0.4039 (4)
GloVe           L      0.7693       0.6097       0.3884       0.7296       0.2183
BERT            L      0.8124 (2)   0.7096 (1)   0.5191 (0)   0.7368 (2)   0.4027 (3)
LXMERT          LV     0.7821 (27)  0.6000 (27)  0.4438 (21)  0.7417 (33)  0.2443 (21)
UNITER          LV     0.7679 (18)  0.6813 (2)   0.4843 (2)   0.7483 (20)  0.3926 (10)
ViLBERT         LV     0.7927 (20)  0.6204 (14)  0.4729 (16)  0.7714 (26)  0.3875 (14)
VisualBERT      LV     0.7592 (2)   0.6778 (2)   0.4797 (4)   0.7512 (20)  0.3833 (10)
Vokenization    LV     0.8456 (9)   0.6818 (3)   0.4881 (9)   0.8068 (10)  0.3439 (9)

FIGURE 3.51: From Lu et al. (2022). Model-agnostic improvement in the few-shot setting with the GLUE benchmark.

This table from Lu et al. (2022), where models are trained using only up to 5% of the training set, shows for example the ability of a model supervised with "imagination" (a generated visual representation of a certain textual input) to outperform models with only simple visual supervision (the Voken model). This is just an example, but the ability to perform well in few-shot settings has become the touchstone of the evaluation of modern multimodal models.
3.3.7 The End Of This Story

We started this story with the Symbol Grounding Problem, which states that to grasp the meaning of a word, the word has to be put in a context other than the purely linguistic one. We thus investigated some of the architectures proposed to ground words in a visual space in the form of static images. The goal (hope) is to better capture the semantics of words, in the form of better word embeddings, to be employed in heterogeneous tasks, from semantic similarity to downstream tasks such as sentiment analysis.

From this brief analysis it emerges that grounding words in images can actually improve the representation of concrete concepts, whereas visual grounding does not seem to add value to pure language models when it comes to abstract concepts. Nonetheless, forms of visual supervision like the Voken-Classification task, or the employment of generative models which allow models to "imagine" words, such as in the iACE framework, might be the right way to bridge this gap.

Transformers have been a revolution in the field of NLP, and with their advent the trend has become to build models with pre-training tasks capable of generating powerful task-agnostic word representations. The knowledge gained with these tasks can then be transferred to downstream tasks, with the goal of limiting the amount of labeled data necessary to fine-tune models.
Labeling data is indeed costly: this is why the ability of a model to generalize well when exposed to just a few training examples has been steadily gaining importance as an evaluation metric. This is the so-called few-shot learning.

The table shown in figure 3.51 (from Lu et al. (2022)):

                     SST-2                 QNLI                  QQP                   MNLI
Extreme Few-shot     0.1%   0.3%   0.5%    0.1%   0.3%   0.5%    0.1%   0.3%   0.5%    0.1%   0.3%   0.5%
VOKEN(BERT_base)     54.70  77.98  80.73   50.54  51.60  61.96   44.10  60.65  65.46   37.31  54.62  58.79
iACE(BERT_base)      77.98  80.96  81.42   51.64  58.33  64.03   49.36  63.67  71.17   40.07  56.49  59.57
VOKEN(RoBERTa_base)  70.99  71.10  77.86   54.37  62.23  65.78   62.32  67.25  70.18   48.59  49.76  58.23
iACE(RoBERTa_base)   75.34  78.66  83.60   54.79  65.03  65.83   65.43  68.11  70.77   48.94  52.74  59.39
Normal Few-shot      1%     3%     5%      1%     3%     5%      1%     3%     5%      1%     3%     5%
VOKEN(BERT_base)     81.40  86.01  84.75   64.17  77.36  80.19   72.55  78.37  80.50   60.45  62.73  72.35
iACE(BERT_base)      82.45  87.04  86.47   65.09  79.54  80.52   74.31  78.69  80.52   62.15  70.43  73.73
VOKEN(RoBERTa_base)  83.78  84.08  87.61   75.00  81.16  81.23   73.14  79.09  79.63   63.51  70.68  74.02
iACE(RoBERTa_base)   83.83  84.63  89.11   79.35  81.41  81.65   73.72  79.38  79.81   65.66  70.76  74.10

Moreover, Transformer-based models have a "universal vocation": they tend to be multimodal and multi-task, encompassing vision, language, and vision-and-language tasks. This idea might be appealing because humans learn by being exposed to a multitude of different inputs and tasks. But as we have seen, pure language models such as BERT still tend to outperform multimodal multi-task models. There is definitely room for improvement.

One might wonder whether the grounding of words in images is the right way to seek a better representation of words.
Well, humans learn using all five senses, and maybe the answer is to incorporate more heterogeneous perceptual information into the models: not only static images but also videos, speech and the like. The debate is still open: the story goes on. . .

Last but not least, a mention needs to be made of concrete applications of these image-empowered word embeddings. The use of images to support linguistic models has been experimented with in several fields, from Dialogue Response Generation (e.g. Sun et al. (2021)) to Machine Translation, where for example Ive et al. (2019) found images to improve the quality of translation when the textual context is generic and/or ambiguous. The number of potential applications of the models described in this subchapter is growing steadily in the scientific community. But this is yet another story. . .

3.3.8 Appendix: Selected Models - Summary

A table (available here) contains a summary of selected language models augmented with visual components. For each model, the following information is reported:

• Pure language model and pretraining data
• Visual features and pretraining data
• Fusion strategy of the two modalities
• Benchmarks/baselines for evaluation

3.4 Text supporting Vision Models

Author: Max Schneider
Supervisor: Jann Goschenhofer

3.4.1 Introduction

"The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. [. . . ] Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available.
Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. [. . . ] One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great."

— Sutton (2019)

This insight seems to directly inspire most model choices presented in this chapter. Each network can be seen as an attempt by its creators to employ their vast available resources on a large scale, with a particular focus on dataset sizes. This mostly becomes feasible through the adaptation of recent findings in natural language processing (NLP; see chapter 2.1) to computer vision (CV). On the one hand, architectural concepts first popularized in NLP are translated to CV (e.g., self-supervised learning or the Vision Transformer; Dosovitskiy et al., 2020b) (see chapter 2.2). On the other hand, these powerful new NLP models, mostly Transformers (Vaswani et al., 2017b), support bigger models from the inside as text-encoding building blocks; hence the name of this chapter. Throughout this chapter, we will introduce the recent relevant CV models CLIP (Radford et al., 2021a), ALIGN (Jia et al., 2021b) and Florence (Yuan et al., 2021) and discuss their underlying core concepts. Their strong performances confirm the potential, hinted at by the impressive GPT-3 (Brown et al., 2020), of improving CV and increasing scale with the help of NLP.

3.4.2 Concepts

3.4.2.1 Web-scale data

A core problem that troubles researchers is the lack of robustness of previous state-of-the-art CV models to distribution shifts, i.e., when a model with good performance on its original dataset fails to generalize (transfer its knowledge) to new, more or less similar datasets.
E.g., Radford et al. (2021a) report that a ResNet101, which they trained on ImageNet to an accuracy of 76.2%, maintains only an accuracy of 32.6% on ObjectNet. This suggests that the model perhaps did not learn high-quality latent representations, but instead overfit to the dataset-specific data-generating distribution. A common way to tackle this would be to try out various changes to the architecture and the training algorithm of the network. But this kind of adaptation, inscribing expert knowledge into the model, seems to repeat the mistake pointed out by Sutton (2019); "micromanaging" a model is likely to thwart future scaling.

The researchers behind CLIP, ALIGN and Florence follow a different approach, based on scale. They try to increase the sample size as much as possible and work with tremendous numbers of training observations:

• 400 million (CLIP; Radford et al., 2021a)
• 900 million (Florence; Yuan et al., 2021)
• 1.8 billion (ALIGN; Jia et al., 2021b)

These large-scale datasets are generated using the vast amount of image-text pairs produced by and readily available on the internet. Thus, error-prone, cost- and labor-intensive (and therefore difficult to scale) manual labeling is avoided. Unfortunately, models trained on web data also become vulnerable to its downsides. Because of its extremely noisy nature, some form of preprocessing is still needed, e.g., filtering for English language, excluding graphic content and, optionally, removing images with non-informative alt-texts. This makes some degree of dataset curation, and therefore arbitrary choices, necessary. Likewise, the social biases inherent to the internet are reproduced; furthermore, while this approach improves data efficiency to some degree (see the next subsection 3.4.2.2), the poor data efficiency of deep learning is not substantially remedied, but mainly compensated for with a super-scalable source of supervision (Radford et al., 2021a).
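A filtering step of this kind might look roughly as follows. The concrete rules, the list of generic alt-texts and the `graphic_content` metadata flag are illustrative assumptions for this sketch, not the actual pipelines used for CLIP, ALIGN or Florence:

```python
# Illustrative web-data filter; rules and metadata keys are assumptions.
GENERIC_ALT_TEXTS = {"image", "img", "photo", "picture", "thumbnail"}

def keep_pair(alt_text, lang, meta):
    """Decide whether a scraped (image, alt-text) pair enters the dataset."""
    if lang != "en":                        # filter for English language
        return False
    text = alt_text.strip().lower()
    if not text or text in GENERIC_ALT_TEXTS:
        return False                        # non-informative alt-text
    if meta.get("graphic_content", False):  # exclude graphic content
        return False
    return True

pairs = [
    ("A dog catching a frisbee", "en", {}),
    ("img", "en", {}),
    ("Ein Hund am Strand", "de", {}),
]
kept = [p for p in pairs if keep_pair(*p)]  # only the first pair survives
```

Every rule added here is exactly the kind of "arbitrary choice" the text mentions: it trades noise for coverage and inscribes curation decisions into the dataset.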
3.4.2.2 Contrastive objective

This source of supervision is the information contained in the co-occurrence of the image with its alt-text. It is accessed through natural language supervision. The architectures jointly train two sub-networks for image and text encoding, respectively. During this, the vector encodings are aligned in the latent representation space through minimizing a variant of the contrastive loss function (3.10) (Tian et al., 2020). The first half of the loss for the first image-text pair reads

$$\ell_1^{V_{\text{img}},V_{\text{txt}}} = -\mathbb{E}_{\{v_{\text{img}}^1, v_{\text{txt}}^1, \ldots, v_{\text{txt}}^N\}}\left[\log \frac{h_\theta(\{v_{\text{img}}^1, v_{\text{txt}}^1\})}{h_\theta(\{v_{\text{img}}^1, v_{\text{txt}}^1\}) + \sum_{k=2}^{N} h_\theta(\{v_{\text{img}}^1, v_{\text{txt}}^k\})}\right], \quad (3.10)$$

where $v_{\text{img}}^1$ and $v_{\text{txt}}^1$ are vector encodings (latent representations) of image 1 and text 1, and $h_\theta(\cdot)$ is a similarity measure. In order to guarantee symmetry, the total loss is formed by the sum of $\ell_1^{V_{\text{img}},V_{\text{txt}}}$ and $\ell_1^{V_{\text{txt}},V_{\text{img}}}$, where the pairwise similarities of one text and every image are calculated instead of the other way around.

Figure 3.52 visualizes this. Initially, all images and texts in the training data are encoded by the responsible sub-network. Using the resulting encodings, a similarity matrix with elements $h_\theta(\{v_{\text{img}}^i, v_{\text{txt}}^j\})$ can be calculated. Loosely speaking, the contrastive objective is to maximize the elements on the diagonal and minimize the others.

FIGURE 3.52: Visualization of a contrastive objective (Radford et al., 2021a). After encoding the data, a similarity matrix for the images and texts is computed. The aim is that the N true image-text pairs score high in terms of similarity, while the N² − N other possible combinations score low.

Contrastive learning can be contrasted with classical predictive learning. Figure 3.53 gives an interesting insight into the choice of the space where goodness of fit is measured. The exemplary task is to color an image given its B/W version.
Approach (a) first encodes the B/W image and then decodes the interim latent representation to fitting colors. The goodness of this fit is measured in the output space, meaning the estimated colors are compared to the true colors. Conversely, approach (b) measures the loss in the representation space.⁶ A reason for the good performance of contrastive learning could be that, while common prediction losses (e.g., the L2 loss) penalize each prediction output dimension independently, approach (b) implies measurement in the intertwined representation space (Tian et al., 2020).

FIGURE 3.53: Predictive vs. contrastive learning: Predictive losses are measured in the output space while contrastive losses are measured in the representation space, indicated by red dotted boxes (Tian et al., 2020).

But in the end, rather than theoretical considerations, the driving factor for using this objective is data efficiency. As can be seen in figure 3.54, Radford et al. (2021a) start their search for an adequate pre-trained model (more on this in subsection 3.4.2.3) by experimenting with a Transformer-based language model predicting the exact captions of an image. It turns out that this approach trains three times slower, in terms of data efficiency, compared to a simpler baseline of predicting a bag-of-words text encoding. Additionally, switching to the contrastive objective of CLIP improves data efficiency by a factor of four. Nonetheless, the switch to contrastive learning leads to some limitations. Its rigidity demands certain extra steps and forfeits the high flexibility of generative models.
In particular, this means contrastive models similar to CLIP are limited to choosing from available options and cannot freely generate texts or images. To extend the capabilities of those models, additional network building blocks are necessary.

⁶Note that contrastive learning easily works with combinations of modalities other than text and image; here B/W and colors.

FIGURE 3.54: Data efficiency of the contrastive objective. Development of zero-shot accuracy (see the next subsection 3.4.2.3) on ImageNet with an increasing number of instances of training data processed by the models. The contrastive objective reaches similar accuracy scores as the generative approach with only a seventh of the amount of data (Radford et al., 2021a).

3.4.2.3 Foundation models and zero-shooting

The first models which are considered foundation models today began to appear in NLP. The term, later coined by Bommasani et al. (2021), refers to models that are noteworthy due to their large scale and ability to adapt to a wide variety of downstream tasks. An early example is BERT (Devlin et al., 2018b). Often, foundation models have an unfinished touch to them and the true scope of their capabilities cannot be sketched out clearly. This generally is the case because the desired abilities of neural networks are not designed for explicitly, but rather emerge during their implementation and usage on downstream tasks. Bommasani et al. (2021) cite GPT-3's ability to perform certain types of new tasks solely by confronting it with the right natural language prompt. E.g., it is possible to get GPT-3 to summarize a paragraph by appending "TL;DR" (too long, didn't read) to the prompt, which is a common pattern on the internet to signal a following summary. This is referred to as "in-context learning" (Brown et al., 2020).
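Assembling such a prompt is trivially simple; the sketch below only builds the text (the call to the language model itself is omitted), with the cue string following the pattern described above:

```python
def summarization_prompt(paragraph: str) -> str:
    # In-context learning: no fine-tuning, just a textual cue that makes
    # a language model continue with a summary (Brown et al., 2020).
    return paragraph.strip() + "\nTL;DR:"

prompt = summarization_prompt(
    "Foundation models are large-scale models that adapt "
    "to a wide variety of downstream tasks."
)
```

The "task specification" lives entirely in the input string; swapping the cue (e.g. for a translation instruction) changes the task without touching the model's weights.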
It is apparent that one can make up plenty of unexpected ways to employ these models, and it remains unknown whether there is a further way no one has thought of yet. This means possibly saving computational and data collection costs down the line, which, unfortunately, is true for malicious use cases, e.g., surveillance, too.

Foundation models build on the concept of transfer learning, i.e., pre-training a model on a feasible source task and applying it to the desired downstream task. In the context of this chapter, this means pre-training on web-scale data (see subsection 3.4.2.1) and evaluating performance on various common classification datasets. E.g., Radford et al. (2021a) name the SVHN dataset
Afterwards, the CV sub-network computes the encoding of the image +to be classified and all pair-wise similarity scores are returned. The pair with +the best score can be retrieved as the decision. Image retrieval works the other +way around: After an initial encoding of all images, the ones most similar to +the encoded natural language text prompt in the representation space can be +returned. +FIGURE 3.55: Visualization of zero-shooting (Radford et al., 2021a). + +plane +car +a photo of +Text +dog +a object}. +Encoder +bird +T1 +T2 +T3 +TN +Image +I, ·T, +I, ·T2 +I,·T3 +I, TN +Encoder +a photo of +adog.3.4 Text supporting Vision Models +153 +3.4.3 +Architectures +3.4.3.1 +CLIP +The first of the large scale contrastive CV models that were published is CLIP, +short for Contrastive Language-Image Pre-training (Radford et al., 2021a). +The components of its name are explained in previous subsections 3.4.2.2, +3.4.2.1 and 3.4.2.3 and are the crucial concepts of ALIGN and Florence as well. +CLIP is a product of OpenAI, but its code is freely available and the different +versions can be accessed as python modules. The dataset used for training is +not released though. +A lot of preliminary work stems from Zhang et al. (2020b), who introduced con- +trastive representation learning using image-text pairs. Their implementation +of the contrastive loss function (3.10) follows +ℓVimg,Vtxt +1 += − log +exp(⟨v1 +img, v1 +txt⟩/τ) +�N +k=1 exp(⟨v1 +img, vk +txt⟩/τ) +, +(3.11) +where ⟨v1 +img, v1 +txt⟩ represents the cosine similarity, i.e., v1⊤ +imgv1 +txt/(∥v1 +img∥∥v1 +txt∥), +and τ ∈ R+ is a temperature parameter, which is directly learned during +training (Zhang et al., 2020b). CLIP adopts this. ℓVtxt,Vimg +1 +, the counterpart to +ℓVimg,Vtxt +1 +for the total loss, is function (3.11) with switched arguments. This +can be viewed as a symmetric cross entropy loss over the cosine similarity of +the embeddings (Radford et al., 2021a). 
Architecture

The text encoder of CLIP (see figure 3.52) is a modified Transformer (Vaswani et al., 2017b), which was also used for GPT-2 (Radford et al., 2019b). For the image encoder, multiple sub-networks are evaluated:

• ResNets: ResNet-50, ResNet-101
• ResNets which follow EfficientNet-style model scaling: RN50x4, RN50x16, RN50x64
• Vision Transformers: ViT-B/32, ViT-B/16, ViT-L/14

The best-performing sub-network was the ViT-L/14. In turn, they trained it for an additional epoch with higher-resolution images (336px), denoting this version ViT-L/14@336px. If not indicated otherwise, the reported performances refer to this version of CLIP. The EfficientNet-style ResNets use 4x, 16x and 64x the compute of a ResNet-50, and the largest model (the RN50x64) trained for 18 days on 592 V100 GPUs, while the ViT-L/14 only took 12 days on 256 GPUs. The high parallelization capabilities of Transformers seem to pay off.

When zero-shooting was explained initially (see subsection 3.4.2.3), a text processing step was skipped. As can be seen in figure 3.55, there is an additional operation before the labels are fed into the text encoder. In order to help the model understand the context of the words, the class labels are embedded in a sentence, e.g., "A photo of a {label}.". This increases the model's zero-shot accuracy on ImageNet by 1.3 percentage points (pp). When ensembling 80 different context prompts⁷, Radford et al. (2021a) improve ImageNet accuracy by an additional 3.5pp, which adds up to a total of nearly 5pp. The average performance gain across 36 datasets is reported to be 5pp. It is similarly possible to directly communicate visual concepts like "picture", "macro", "drawing" or even "dog" to the model.

Robustness

Figure 3.56 illustrates the performance of CLIP and a ResNet101 whose training on ImageNet was stopped at the point it reached the same accuracy as zero-shot CLIP.
It can be deduced that the methods studied in the paper of Radford et al. (2021a) constitute an important step towards closing the robustness gap mentioned earlier (see subsection 3.4.2.1). While the performance of the ResNet101 deteriorates on datasets generated from data distributions increasingly different from ImageNet, CLIP remains fairly accurate. Note that these findings have to be taken with a grain of salt: because OpenAI does not grant public access to their training data, independent parties cannot investigate these claims on their own. E.g., one has to rely on the conclusions of their overlap analysis to rule out that CLIP saw biasing amounts of future test data during training.

FIGURE 3.56: Robustness of zero-shot CLIP to distribution shifts (Radford et al., 2021a).

Dataset          ResNet101  Zero-Shot CLIP  Δ Score
ImageNet         76.2       76.2            0%
ImageNetV2       64.3       70.1            +5.8%
ImageNet-R       37.7       88.9            +51.2%
ObjectNet        32.6       72.3            +39.7%
ImageNet Sketch  25.2       60.2            +35.0%
ImageNet-A       2.7        77.1            +74.4%

⁷Prompts like: "A photo of a big {label}.", "A photo of a small {label}." (Radford et al., 2021a)

CLIP as a building block

Shen et al. (2021) study how the performance of Vision-and-Language (V&L) models improves when the visual encoder is switched to CLIP's strong image encoder. They discover that in this field of CV the ViT-B scores significantly worse than the ResNets. E.g., tests on image captioning reveal that the V&L model using ViT-B often performs only half as strong as the version using the RN50x4 (the largest network used in this study). This is possibly due to the pooling strategies of ViT-B, which result in a lack of visual localization abilities. Shen et al.
(2021) test their hypothesis and generate, e.g., figure 3.57, which depicts Grad-CAM visualizations for a V&L model with a ViT-B backbone and a ResNet-50 backbone, given the question "What color is the woman's shirt on the left?". The red area indicates relevant pixels and appears much more focused for CLIP-Res50 than for CLIP-ViT-B.

FIGURE 3.57: Grad-CAM visualizations for the prompt "What color is the woman's shirt on the left?".

3.4.3.2 ALIGN

The approach of Jia et al. (2021b) is largely similar to CLIP. They reiterate the necessity of large-scale vision datasets, but assert that even CLIP's data collection process still involves a non-trivial amount of data curation. They propose that the amount of additional observations obtained through minimizing the amount of filtering makes up for the increased noise. Following this rationale, they create a training dataset with 1.8 billion image-text pairs. The corresponding model is named ALIGN, short for "A Large-scale ImaGe and Noisy-text embedding", whose acronym hints at the contrastive loss, which aligns vector encodings in the representation space (see subsection 3.4.2.2).

Architecture

ALIGN follows the dual-encoder architecture employed by Zhang et al. (2020b) and Radford et al. (2021a), but uses a part of BERT-Large as the text encoder and EfficientNet-L2 as the image encoder, which they jointly train from scratch. The model has around 800 million parameters (Alford, 2021). Subsection 3.4.4 goes into more detail about the performance of ALIGN and compares all three models discussed in this subsection.

Connecting image and text representations

The contrastive loss function aligns the latent representations of the different modalities. In other words, the explicit objective is that similar vector encodings implicate similar inputs.
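This alignment can be illustrated with a small retrieval sketch; the three-dimensional encodings below are made up for the example, real encoders would supply them:

```python
import numpy as np

def retrieve(query, gallery):
    """Rank gallery items by cosine similarity to the query encoding."""
    q = query / np.linalg.norm(query)
    scores = {name: float((v / np.linalg.norm(v)) @ q)
              for name, v in gallery.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy encodings in the shared representation space:
img_eiffel = np.array([1.0, 0.0, 0.0])   # image encoding: Eiffel tower
txt_snow   = np.array([0.0, 1.0, 0.0])   # text encoding: "snow"
gallery = {
    "eiffel_summer": np.array([1.0, 0.1, 0.0]),
    "eiffel_snowy":  np.array([0.7, 0.7, 0.0]),
    "beach":         np.array([0.0, 0.1, 1.0]),
}

# Querying with the SUM of an image and a text encoding:
ranking = retrieve(img_eiffel + txt_snow, gallery)  # "eiffel_snowy" first
```

Because both modalities live in the same space, the composite query (image plus text) lands closest to the gallery item that combines both concepts.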
This means arithmetic operations like the ones mentioned in chapter 2.1 are not only meaningful on encodings belonging to the same modality, but also across modalities. E.g., one can add up the image encoding of a picture of the Eiffel tower and the text encoding of the word "snow" and retrieve pictures with high cosine similarity as a result; see figure 3.58 for an illustration.

FIGURE 3.58: Multimodal image retrieval via arithmetic operations on word and image embeddings.

3.4.3.3 Florence

While in principle the approach of Yuan et al. (2021) does not largely differ from the others, the focus of this paper is more on creating a true foundation model. In order to achieve this, they propose a map of possible vision applications which they try to cover via extending the core model with modules. As figure 3.59 depicts, they want to advance into the dimensions of fine-grained object detection, dynamic action recognition and true multimodal tasks. Due to their big ambitions, they name their model Florence, after "the birthplace of Renaissance" (Yuan et al., 2021).

FIGURE 3.59: Florence's approach to foundation models: a general-purpose vision system for all tasks.

Architecture

As the two encoders for the pre-trained core, they use a hierarchical Vision Transformer (CoSwin Transformer) for images and a Transformer similar to CLIP's for text. Their 893 million parameters are also jointly trained from scratch, on 900 million image-text pairs. The alignment happens in the so-called image-label-description space, which is encoded through a special version of the contrastive loss function that regards all image-text pairs with the same label as positive instances. Figure 3.60 depicts their version of figure 3.52, where one can schematically see how they flexibly add modules to the pre-trained core in order to adapt to various downstream tasks.

FIGURE 3.60: Modular architecture of Florence.
3.4.4 Performance comparison

Throughout the papers of Radford et al. (2021a), Jia et al. (2021b) and Yuan et al. (2021) we were able to collect three tables with reported performance measures to compare these approaches.

Table 3.61 summarizes the zero-shot accuracies on four different ImageNet variants. Unfortunately, Yuan et al. (2021) only stated their performance on the original ImageNet, where they beat CLIP and ALIGN by a margin of 7.3pp. The results on the other three ImageNet variants are mixed and there is no clear winner between CLIP and ALIGN.

FIGURE 3.61: Top-1 accuracy of zero-shot transfer of models to image classification on ImageNet and its variants.

Table 3.62 concerns zero-shot image retrieval on the Flickr30K and the MSCOCO dataset (see chapter 2.3). Even though there are no major score differences, there is a clear ranking with CLIP in third, ALIGN in second and Florence in first place.

FIGURE 3.62: Zero-shot image and text retrieval (Yuan et al., 2021).
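Zero-shot transfer as reported in these comparisons follows the CLIP recipe: encode one text prompt per class and assign an image to the class whose prompt embedding is most similar. A toy numpy sketch with hypothetical embeddings (not the real models):

```python
import numpy as np

def zero_shot_classify(image_emb, class_prompt_embs):
    """Pick the class whose prompt embedding is most similar to the image."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    prompts = class_prompt_embs / np.linalg.norm(
        class_prompt_embs, axis=1, keepdims=True)
    return int(np.argmax(prompts @ image_emb))

# Hypothetical embeddings of "a photo of a {cat, dog, car}".
prompt_embs = np.array([
    [1.0, 0.0, 0.0],    # cat
    [0.0, 1.0, 0.0],    # dog
    [0.0, 0.0, 1.0],    # car
])
image = np.array([0.2, 0.9, 0.1])   # hypothetical embedding of a dog photo
pred = zero_shot_classify(image, prompt_embs)
```

No task-specific training is involved; only the class names (wrapped in a prompt template) and the image pass through the pre-trained encoders.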
The most comprehensive comparison is shown in table 3.63. It depicts the accuracy of zero-shot CLIP and Florence on various datasets as well as the scores of all three models fine-tuned to the respective datasets. Florence beats CLIP in nearly all evaluations, in the zero-shot setting as well as in fine-tuned performance. Jia et al. (2021b) only report on four of these twelve datasets, where they win half of the time.

Summing up, ALIGN achieves its goal of replicating CLIP's impressive performance while dramatically reducing the required data curation effort, and Florence has the overall top performance. This could be attributed to its custom loss, to Yuan et al. (2021) striking the best balance between sample size and data curation, or to Florence having the best sub-networks; or a combination of all three.

Once again, note that none of the training datasets were made publicly available. It cannot be guaranteed that all benchmarks were evaluated on unseen datasets.

The zero-shot retrieval results (figure 3.62) are:

                Flickr30K (1K test set)          MSCOCO (5K test set)
              Image→Text    Text→Image        Image→Text    Text→Image
              R@1    R@5    R@1    R@5        R@1    R@5    R@1    R@5
  CLIP        88.0   98.7   68.7   90.6       58.4   81.5   37.8   62.4
  ALIGN       88.6   98.7   75.7   93.8       58.6   83.0   45.6   69.8
  Florence    90.9   99.1   76.7   93.6       64.7   85.9   47.2   71.4

The zero-shot classification results (figure 3.61) are:

              ImageNet   ImageNet-R   ImageNet-A   ImageNet-V2
  CLIP        76.2       88.9         77.2         70.1
  ALIGN       76.4       92.2         75.8         70.1
  Florence    83.7       -            -            -

FIGURE 3.63: Top-1 accuracy of CLIP, Florence and ALIGN on various datasets.

3.4.5 Resources

One can access the pre-trained CLIP models on GitHub and they have even found their way into simple command line tools already. For example, there is a CLI named rclip, which can be used for personal image retrieval, wrapping the ViT-B/32 CLIP architecture. On a mid-range, regular laptop, we were able to find seemingly good matches for search terms which we tried out inside a folder containing about 100 different pictures.
After an initial caching, one request took about ten seconds. Furthermore, CLIP continues to be used inside new models, e.g., DALL·E 2, where it is used for the image embedding (Ramesh et al., 2022b). Also, there is a crowd-sourcing effort to replicate CLIP's training dataset, called LAION-400M (Schuhmann, 2022). To validate the image-text pairs collected for this, their cosine similarity is computed using CLIP and instances with a value too low are discarded. To our knowledge, no resources were open-sourced as part of the other two papers, ALIGN and Florence.

3.5 Models for both modalities

Author: Steffen Jauch-Walser

Supervisor: Daniel Schalk

Data is naturally at the heart of every data science issue. While there have been many advances made in machine learning in recent years, many promising research areas remain, as do a multitude of problems associated with them. One such promising area is that of multi-modal machine learning models. Combining different input data is a key aspect of making models more sophisticated. When thinking about teaching robots specific tasks, detecting hateful memes or deep fakes, it is apparent that success might only be achieved through the combination of multiple modalities. Context is key.

However, learning context requires increasingly complex models. While early
While early machine learning models built their success upon the possibility to analyze the big pool of available, often unstructured data, modern machine learning models are so demanding that there is often not enough data or training time available. Obtaining data is a major issue for multi-modal machine learning. Since labelling data in vast amounts is prohibitively expensive, larger models have to come up with specific strategies to move forward, such as self-supervised training or automatically scraped web datasets. Nevertheless, when models become so large that billions of parameters have to be learned, even scraping the whole web starts to show its limits. Another natural issue is the transformation of different types of data into usable model inputs.

There is no shortage of different single-modality machine learning models. On the contrary, when every new hyperparameter configuration might be seen as a new model, it becomes hard to keep track. More importantly, it is often not clear how a model from one area transfers to another. Did we learn some modality-specific bias or a general principle? Consolidating different models into a unifying framework is a key prospect of multimodal machine learning. While the grand dream of a single unifying model might be out of reach, consolidating different areas is well in sight.
In the following, we will have a look at the challenges and prospects of multimodal machine learning against the background of visual language models. Visual language models are models which can deal with both language and images as input data. Specifically, we will have a closer look at three different models: Data2vec, VilBert and Flamingo. Data2vec is an unsupervised model that can handle different modalities, but not their interaction, using a single unifying training framework. VilBert is an early visual-language model that can handle interactions between images and text through its innovative concept of cross-attention. Flamingo is a recent few-shot visual language model that features large expressive text capabilities through the use of a large language model. With 80B parameters, it particularly highlights how to leverage the communication between frozen models when further scaling up the model size.

An overview of the popularity of current research fields in visual language modelling is provided in figure 3.64. A detailed list of trends for each of those fields can be found in Uppal et al. (2022). Most research is done in the areas of visual question answering (VQA) and visual captioning (VC), but also for example visual commonsense reasoning (VCR), vision-language navigation (VLN) or multimodal affective computing (MAC). MAC uses images and text to infer sentiment, for example through facial expressions. VCR, as an extension of VQA, is particularly interesting in the realm of making models more interpretable. After all, we would like to know why machine learning models do what they do. Finally, VLN has many promising practical applications in the field of robotics, particularly the interaction of humans and robots.

FIGURE 3.64: Uppal et al.
(2022): VisLang Paper Trends (previous 2 years)

3.5.1 Data2vec

With data2vec (Baevski et al., 2022), data scientists at Meta, formerly Facebook, developed an architecture that addresses some of the mentioned issues while highlighting the importance of sophisticated training schemes. Their algorithmic structure is able to work with either text, image or speech data. On top of that, the model is self-supervised based on a teacher-student relationship which reduces the need for human labelling. It is not a universal model in the sense that it works with any input, nor is it even a general model in the sense that the algorithm is exactly the same for each modality. However, the overall model structure remains the same for either text, speech or image input data, while only the specific encoding, normalization and masking strategies are modality-specific. In that regard, it is a step towards a more general way of dealing with different modalities and it is very effective at doing so, given the benchmark results on typical datasets. Particularly noteworthy is also the way they implement the self-supervised learning: data2vec predicts contextualized and continuous representations rather than the typically used discrete tokens such as sub-words. Working with latent representations of the input space has two advantages: not only is the number of prediction targets not a priori limited, but they are also richer in information.

Figure 3.65 depicts the general model architecture. The two main components are a teacher and a student model which differ in only one aspect: the weights of the teacher model are an exponentially decaying average of the student's weights. The purpose of the teacher model is to create training targets for the student model. In a first step, a modality is chosen and inputs are encoded according to the specific encoding scheme for that modality.
A masked version is given to the student model, but notably, the teacher model has access to an unmasked, complete view of the input data. Hence, the resulting training targets will be fully contextualized using a self-attention mechanism over the whole input data. The training targets are based on the top K layers of the teacher model, depicted in blue in figure 3.65. More specifically, denote by $y_t$ the training target at time $t$ and by $\hat{a}_t^l$ the normalized output of the $l$-th block; then

$$y_t = \frac{1}{K}\sum_{l=L-K+1}^{L} \hat{a}_t^l,$$

i.e. the training targets are the average of the outputs of the top K layers of the teacher network after a normalization has been applied. Normalization helps to stabilize the training process and prevent model collapse, which can be an issue with models that learn their own representation.

FIGURE 3.65: Baevski et al. (2022): Data2vec architecture - a teacher model creates contextualized latent targets on the basis of its top K layers (blue) as prediction task to train the student model.

From the authors' point of view, working with a latent representation of the actual learner as training target is a simplification of many commonly used modality-specific designs, despite the caveat that this paper still uses modality-specific encoding strategies. Compared to other models, there is no cross-modality training. The specific loss function used to regress the targets is a smooth L1 loss:

$$L(y_t, f_t(x)) = \begin{cases} \frac{1}{2}\,(y_t - f_t(x))^2/\beta & \text{if } |y_t - f_t(x)| \le \beta \\ |y_t - f_t(x)| - \frac{\beta}{2} & \text{otherwise} \end{cases}$$

Using a smooth L1 loss has the advantage of being continuous yet less sensitive to outliers; however, the β parameter needs tuning. As far as the general model architecture is concerned, the underlying architecture is a standard transformer architecture (Vaswani et al., 2017b).

How does the modality-specific input handling work?
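The teacher-student mechanics above, i.e. the exponentially decaying weight average, the top-K layer averaging and the smooth L1 loss, can be sketched in numpy as follows (a simplified illustration, not the actual data2vec code):

```python
import numpy as np

def ema_update(teacher_w, student_w, tau=0.999):
    """Teacher weights are an exponential moving average of student weights."""
    return tau * teacher_w + (1 - tau) * student_w

def build_targets(layer_outputs, k):
    """Average the (assumed already normalized) top-k teacher layer outputs."""
    return np.mean(layer_outputs[-k:], axis=0)

def smooth_l1(y, pred, beta=1.0):
    """Smooth L1 loss: quadratic near zero, linear for large residuals."""
    diff = np.abs(y - pred)
    loss = np.where(diff <= beta,
                    0.5 * diff ** 2 / beta,
                    diff - 0.5 * beta)
    return float(loss.mean())

# Toy example: 12 teacher layers, sequence length 4, feature dimension 8.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 8)) for _ in range(12)]
targets = build_targets(layers, k=8)        # average of the top 8 layers
student_pred = rng.normal(size=(4, 8))      # hypothetical student output
loss = smooth_l1(targets, student_pred)
```

In the real model the EMA update runs over all teacher parameters after each student step, and the student regresses the targets only at the masked positions.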
In many ways, in this work the authors combine the strategies developed in multiple previous works and add a unifying framework on top of them. For images, the typical Vision Transformer (ViT) strategy (see figure 3.66) to transform images with a size of 224x224 pixels into 16x16 pixel patches is employed. The resulting 196 patches are each flattened and linearly projected, and together with a learnable positional encoding they serve as the input sequence to the vision transformer. A classification token is used to produce the final categorization. The contextualization is produced in the multi-head attention blocks, as explained in earlier chapters. In short, multi-head attention first projects the keys, queries and values with learned linear projections, which are then evaluated in parallel to create more expressive attention maps. Attention itself is calculated as scaled dot-product attention using a softmax over the scaled product of keys, queries and values (Vaswani et al., 2017b). As far as the vision transformer itself is concerned, data2vec tests two different model sizes: a base model of 12 and a large model of 24 transformer blocks. The masking strategy for images follows the Bert pre-training approach of image transformers, BEiT, proposed by Bao et al. (2021). In particular, multiple adjacent blocks are masked with random aspect ratio. The minimum size of a masked block is 16 patches. In total, 60% of patches are masked in the data2vec algorithm, which is an increase over the original 40% used by BEiT; the authors note that they found the increased masking to be more accurate. The augmentation strategies are similar, as well.
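The patching step described above can be sketched as follows, assuming a 224x224 RGB input split into 16x16 patches; this is a simplified numpy version of the ViT input pipeline, without the learned projection and positional embeddings:

```python
import numpy as np

def patchify(image, patch_size=16):
    """Split an (H, W, C) image into flattened non-overlapping patches."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    n_h, n_w = h // patch_size, w // patch_size
    patches = (image
               .reshape(n_h, patch_size, n_w, patch_size, c)
               .transpose(0, 2, 1, 3, 4)        # -> (n_h, n_w, p, p, c)
               .reshape(n_h * n_w, patch_size * patch_size * c))
    return patches

image = np.zeros((224, 224, 3))
tokens = patchify(image)   # 196 patches, each flattened to 16*16*3 = 768 values
```

A learnable linear projection then maps each flattened patch to the model dimension, and the positional embedding is added before the transformer blocks.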
Resized crops, horizontal flipping and colour jittering were used. Naturally, the student and teacher model are given the same modified image. Finally, for image data, the model is measured on a classification task. Hence, the authors use a mean-pooling over all patches in the last transformer block and input that into a softmax-normalized projection that conducts the classification, which is again based on the BEiT model.

FIGURE 3.66: Dosovitskiy et al. (2021): The Vision Transformer (ViT) architecture.

The natural language processing model is implemented with a PyTorch toolkit named fairseq and based on the RoBERTa (Liu et al., 2019b) architecture, which redesigned the standard Bert model training procedure to make it more robust and effective. In particular, it increases hyperparameters such as the learning rate and the batch size. It also removes the next sentence prediction task to improve on the masked language modelling performance. In this case they follow Sennrich et al. (2015b) and encode sub-words as 50k byte-pairs. A separate embedding vector is learned for each type. For the masking, the Bert masking is used: 15% of the embedded tokens are selected, of which 80% are replaced by a learned mask token, 10% are left unchanged and the remaining 10% are replaced with random tokens from the vocabulary. Another strategy that the authors also consider is the wav2vec masking strategy, masking four consecutive tokens with a probability of 0.35 while only using learned mask tokens (Baevski et al., 2020). As it turns out, the latter strategy further improves the results.
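The Bert masking scheme described above, 15% of tokens selected, of which 80% become a mask token, 10% stay unchanged and 10% are replaced by random tokens, can be sketched as follows (simplified, with integer token ids and a hypothetical mask id):

```python
import numpy as np

def bert_mask(tokens, vocab_size, mask_id, rng, mask_prob=0.15):
    """Select ~15% of positions; replace 80% of them with the mask token,
    keep 10% unchanged, and replace 10% with random vocabulary tokens."""
    tokens = tokens.copy()
    selected = rng.random(len(tokens)) < mask_prob
    action = rng.random(len(tokens))
    for i in np.where(selected)[0]:
        if action[i] < 0.8:                  # 80%: learned mask token
            tokens[i] = mask_id
        elif action[i] < 0.9:                # 10%: keep unchanged
            pass
        else:                                # 10%: random token
            tokens[i] = rng.integers(0, vocab_size)
    return tokens, selected

rng = np.random.default_rng(0)
ids = rng.integers(0, 50000, size=1000)      # hypothetical BPE token ids
masked, positions = bert_mask(ids, vocab_size=50000, mask_id=50000, rng=rng)
```

The model is then trained to predict the original tokens (or, in data2vec, the latent targets) at the selected positions only.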
The natural language processing model is evaluated on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018), which includes, for example, natural language inference, sentence similarity and sentiment analysis tasks.

The speech category is also implemented in fairseq. The feature encoder for speech is based on the wav2vec framework and uses 16 kHz inputs. It is built upon seven temporal convolutions intertwined with normalization layers and a GELU activation function such that the output of the encoder is 50 Hz.

As far as the results are concerned, data2vec achieved state-of-the-art performance in vision and language tasks among similar self-supervised models.

FIGURE 3.67: Baevski et al. (2022): data2vec performance (vision):

                                 ViT-B   ViT-L
  Multiple models
  BEiT (Bao et al., 2021)        83.2    85.2
  PeCo (Dong et al., 2022)       84.5    86.5
  Single models
  MoCo v3 (Chen et al., 2021b)   83.2    84.1
  DINO (Caron et al., 2021)      82.8    -
  MAE (He et al., 2021)          83.6    85.9
  SimMIM (Xie et al., 2021)      83.8    -
  iBOT (Zhou et al., 2021)       83.8    -
  MaskFeat (Wei et al., 2021)    84.0    85.7
  data2vec                       84.2    86.6

Figure 3.67 shows the model's performance in computer vision. Pre-trained and fine-tuned simply on the data of the well-known ImageNet-1K dataset, data2vec was evaluated using top-1 accuracy, the standard notion of accuracy, on the task of predicting single labels for images. The base model ViT-B comprises 86M parameters and ViT-L 307M parameters. The results show that predicting contextualized latent representations in a masked prediction setup can work well as model training compared to classical local methods such as predicting visual tokens. MoCo v3 (Chen et al., 2021) is a self-supervised model trained on a contrastive loss. The most similar model is DINO (Caron et al., 2021), which also uses a self-distillation setup to predict teacher outputs using a cross-entropy loss.
However, their prediction target was the final layer rather than averaged layers, while using differing images for teacher and student network. The well-performing MAE model (He et al., 2022) is a masked autoencoder which is trained on reconstructing masked pixels using an asymmetric encoder-decoder architecture. In contrast, MaskFeat (Wei et al., 2022) uses masked feature prediction. Notably, data2vec outperforms all of them although trained for the same number of epochs or fewer; MAE and MaskFeat, for instance, use 1600 epochs rather than data2vec's 800.

FIGURE 3.68: Baevski et al. (2022): data2vec results (language)

Figure 3.68 shows the performance in the language domain, where the model is evaluated on the GLUE benchmark (Wang et al., 2018). The model is pre-trained and fine-tuned separately on the labelled data from each task. Accuracy is reported as the average across 5 tuning cycles. While data2vec achieves a higher average performance than the baseline model, there are tasks where the baseline model prevails. A large portion of the performance difference seems to be driven by the CoLA task. The Corpus of Linguistic Acceptability (CoLA) consists of 10657 sentences from 23 linguistics publications and the task is to judge whether they are grammatically correct. Hence, it is distinctly different from the other tasks. The Stanford Sentiment Treebank (SST) analyzes sentiment in language through movie reviews. The Multi-Genre Natural Language Inference (MultiNLI) corpus contains sentence pairs.
GLUE results on the development set for single-task fine-tuning of individual models (figure 3.68). For MNLI, accuracy is reported on both the matched and mismatched dev sets; for MRPC and QQP, the unweighted average of accuracy and F1; for STS-B, the unweighted average of Pearson and Spearman correlation; for CoLA, Matthews correlation; for all other tasks, accuracy. BERT Base results are from Wu et al. (2020) and the baseline is RoBERTa re-trained in a similar setup as BERT. Results with wav2vec 2.0 style masking of spans of four BPE tokens, with no unmasked tokens or random targets, are also reported:

                                 MNLI       QNLI  RTE   MRPC  QQP   STS-B  CoLA  SST   Avg.
  BERT (Devlin et al., 2019)     84.0/84.4  89.0  61.0  86.3  89.1  89.5   57.3  93.0  80.7
  Baseline (Liu et al., 2019)    84.1/83.9  90.4  69.3  89.0  89.3  88.9   56.8  92.3  82.5
  data2vec                       83.2/83.0  90.9  67.0  90.2  89.1  87.2   62.2  91.8  82.7
  + wav2vec 2.0 masking          82.8/83.4  91.1  69.9  90.0  89.0  87.7   60.3  92.4  82.9

The reproduced figure additionally shows that predicting targets which are the average of multiple teacher layers is more robust than predicting only the topmost layer (K = 1) for most modalities; the effect is very pronounced for speech and NLP, while for vision there is still a slight advantage of predicting more than a single layer.

The MultiNLI corpus focuses on textual entailment across genres. Similar tasks are used in the Recognizing Textual Entailment (RTE) dataset, which focuses on text from news and Wikipedia. The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset that contains answers from Wikipedia to corresponding questions posed by an annotator. The task for the model is to find out whether the sentence contains the answer to the question.
QQP stands for Quora Question Pairs, which analyzes paraphrases. Finally, the Microsoft Research Paraphrase Corpus (MRPC) also consists of sentence pairs from newswires, which may or may not be paraphrases of each other.

As a suitable baseline model, the authors retrain RoBERTa in the respective setup. On top of the heterogeneous performance across language tasks, the evaluation also clearly shows that averaging over multiple layers to create prediction targets improves performance across all three domains. The effect seems to be most pronounced on NLP tasks, whereas CV does not benefit from averaging more than three layers. In the speech domain, six layers seem to be enough to reach peak performance. In any case, the performance loss from simply averaging the maximum number of layers, rather than fine-tuning K, seems small enough to be acceptable.

To sum it up, data2vec is a self-supervised model that can work with either text, speech or image data, but not across modalities. It aims at unifying the learning framework through a teacher-student setup that allows for contextualized latent target prediction. The teacher model is based on a complete view of the input data, which introduces contextualization, while the student model only sees a masked version of the input. Compared to previous work, the authors average the top K layers rather than only the final layer of the model, which has a notable effect as shown in figure 3.68. As there are different layers in the transformer network, the authors also investigate which layers work best for prediction. They conclude that the output of the feedforward layer works best. Built on a transformer architecture, self-attention is the main driver that creates contextualized targets in the teacher model and hence performance. The authors also show that contextualization through the teacher model works best with the complete view of the input rather than a partial view.
On top of not being able to work across modalities, one drawback is that the model's structure still uses modality-specific encoding and masking schemes. In that regard, the perceiver architecture (Jaegle et al., 2021a), for example used in the Flamingo model, is a complementary approach worth exploring. An earlier model that works across modalities is VilBert.

3.5.2 Vision-and-Language Bert (VilBert)

As seen in the previous section, data2vec can handle text, image or speech as input data. However, it cannot do so at the same time. The model's focus is on unifying the training approach rather than working across modalities. However, when we think about multimodal models, we usually think of working with different modalities at the same time. VilBert (Lu et al., 2019b) is a natural extension of the iconic Bert architecture (Devlin et al., 2018c) to vision-and-language modelling. An immediate question is whether vision and language inputs should be handled together in a single stream or in parallel. As we will see, it turns out that encoding inputs in parallel and working with parallel streams increases performance. At the heart of that architecture is a co-attention mechanism which enables information exchange between both modalities.

FIGURE 3.69: Lu et al. (2019b): VilBert's dual stream architecture: dashed transformer modules can be repeated, co-attention modules allow sparse interaction between modalities.

Figure 3.69 shows the employed parallel stream architecture. Each modality is handled separately and fed into two Bert-style transformer models. This allows for both modalities to be handled according to their respective needs, while co-attention layers allow for communication between the streams. For the language stream, the encoding uses the vocabulary plus a special classification token (cls), a sentence separation token (sep) and a masking token (mask).
For the vision stream, image region features are extracted via a Faster R-CNN (Ren et al., 2015) model which was pre-trained on the Visual Genome dataset (Krishna et al., 2016). Since image regions lack a natural ordering, their spatial location has to be encoded as well. VilBert achieves that through a five-dimensional vector that encapsulates the image coordinates and the fraction of the covered image area. Through projection, the dimensions of the positional encoding and visual features are matched and then summed. The image token marks the beginning of such an image region sequence while representing the whole image.

Through the dual stream architecture, the complexity of the model can be adjusted separately for each modality. An alternative approach would have to discretize the visual space via clustering and then use the resulting tokens in the same way as text tokens. The drawbacks of that approach are the potential loss of detail at the discretization stage and the loss of flexibility across modalities as a result of the same processing. Finally, a single stream architecture can interfere with the pre-training of the language models. The model will have to be fine-tuned based on the created visual tokens. As those might be very different from the text tokens, there is potential for the pre-trained language model to become 'damaged' in the process and lose capabilities, an idea that is also central to the Flamingo model presented later on.

FIGURE 3.70: Lu et al. (2019b): Cross-attention in VilBert.

The key innovation in the VilBert paper (Lu et al., 2019b) is the use of co-attention layers. In figure 3.70, the basic architecture is depicted.
The co-attention module computes query, key and value matrices in a standard transformer attention fashion. However, it then feeds the keys and values from each modality into the other modality's multi-head attention block. As a result, the visual attention will be conditioned on text, whereas the language attention will be image-conditioned. This communication between streams only occurs at specific sections in the model, denoted by Co-TRM in figure 3.69. Notably, the language stream features a lot more preprocessing before the first co-attention layer than the image stream.

An interesting question to ask is what is actually learned in those attention layers and how it corresponds to human attention maps. Sikarwar and Kreiman (2022) analyze the efficacy of co-attention layers for VQA tasks in a VilBert network. Specifically, they compute the question-conditioned image attention scores and compare them to human attention maps created in experiments. In those experiments, humans are tasked with unblurring specific image regions to answer the same questions one would expect the machine learning model to answer. Such human attention maps are collected in the VQA-HAT dataset (Das et al., 2017). Rank correlation is used to compare attention maps. Sikarwar and Kreiman (2022) find that in a 6-layer network rank correlation plateaus at layer 4 and increases with the number of image regions proposed while encoding the images. Perhaps more surprisingly, they find a minimal influence of semantics on the generation of the attention maps. Randomly shuffling words in a sentence when testing the model performance barely changes the attention output, which suggests that keywords rather than sentence structures drive the attention output.
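The key/value exchange described above can be sketched as a single-head numpy toy version of co-attention; unlike the real VilBert layer, it omits the learned query/key/value projections and multiple heads:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def co_attention(vis, txt):
    """Each stream attends with its own queries over the OTHER stream's
    keys and values (here queries = keys = values = raw features)."""
    vis_out = attention(vis, txt, txt)   # visual features, text-conditioned
    txt_out = attention(txt, vis, vis)   # text features, image-conditioned
    return vis_out, txt_out

rng = np.random.default_rng(0)
vis = rng.normal(size=(36, 16))   # 36 image regions, feature dimension 16
txt = rng.normal(size=(10, 16))   # 10 text tokens, same dimension
vis_out, txt_out = co_attention(vis, txt)
```

In the full model, learned projection matrices produce Q, K and V per head, and the exchanged attention outputs pass through the usual residual, normalization and feed-forward sublayers.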
Note however that while attention maps remained similar, the model's actual performance on answering the questions dropped notably, by approximately 15%, such that it seems clear that coherent sentences are important for the overall VQA task, but not for the attention creation process. What are the keywords that drive cross-attention in VilBert? The evidence provided by the authors clearly shows that nouns are the most influential parts of speech when considering attention maps. On top of that, prepositions can sometimes help identify spatial relations. There is also some support for the hypothesis that removing Wh-words such as "who" and "where" can improve fine-grained attention maps in the final layer, which might be worth exploring further as preprocessing for deeper networks. Another approach would be to search for ways to improve how attention maps are generated by finding ways to include more of the available sentence information. Most notably, however, using object-based region proposals to process images can lead to bottlenecks that can prevent the model from learning sufficiently fine-grained attention maps, as shown in figure 3.71. Overall, humans are naturally good at VQA tasks. Hence, it is not surprising that attention maps which correlate well with human attention maps also improve model performance.
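The rank correlation used to compare attention maps can be sketched as a Spearman correlation over the flattened maps (a simplified version without proper tie handling):

```python
import numpy as np

def rankdata(x):
    """Assign ranks 1..n; ties are broken by position (enough for a sketch)."""
    x = np.ravel(x)
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    return ranks

def spearman_rank_correlation(a, b):
    """Pearson correlation of the ranks of two flattened attention maps."""
    ra, rb = rankdata(a), rankdata(b)
    ra = ra - ra.mean()
    rb = rb - rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

human = np.array([[0.9, 0.1], [0.2, 0.0]])   # toy human attention map
model = np.array([[0.8, 0.2], [0.3, 0.1]])   # toy model attention map
rho = spearman_rank_correlation(human, model)
```

Because only the ranking of regions matters, the measure is insensitive to the absolute scale of the attention weights, which differs between humans and models.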
FIGURE 3.71: Sikarwar and Kreiman (2022): (Left to right) picture, human attention, 36 regions, 72 regions, 108 regions. Similarity between human and model attention is measured using rank correlation.
Figure 3.71 shows that the number of region proposals fed into the model after processing an image affects its ability to produce adequate attention maps. In this particular case, the question "How many fingers is the girl in the black shirt holding up?" was correctly answered by humans, as well as by a VilBert model using 72 or 108 region proposals. It was answered incorrectly when using only 36 region proposals. Note, however, that in either case the model captured the face of the wrong girl. The model using 72 regions also identified the wrong hand despite answering the question correctly. While the 108-region model identifies the correct hand holding up the fingers, it does not seem to prioritize it over the other identified hands in the picture. Hence, the attention maps are sufficiently different from the human attention map, which highlights the need to look closer not only at how models are performing, but also at how their performance has been achieved.
As far as model training is concerned, VilBert is pre-trained and fine-tuned. The pre-training tasks comprise masked multi-modal modelling and multi-modal alignment prediction, performed on the Conceptual Captions dataset. That dataset contains about 3.1 million usable aligned image-caption pairs, which have been automatically scraped from web images. For the alignment task, the authors create unaligned pairs by randomly mismatching captions and images. For the masking task, 15% of both the visual and language tokens are masked. The task is to reconstruct the masked tokens from the remaining input in classical Bert fashion.
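For the masked image regions, reconstruction is scored with a KL divergence between the model's predicted class distribution and that of the feature extractor (detailed next); as a toy numpy illustration with made-up distributions:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) = sum_i p_i * log(p_i / q_i)
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

teacher = np.array([0.7, 0.2, 0.1])  # class distribution from the feature extractor
student = np.array([0.6, 0.3, 0.1])  # the model's prediction for the masked region
print(kl_divergence(teacher, teacher) == 0.0)  # True: identical distributions
print(kl_divergence(teacher, student) > 0.0)   # True: mismatch is penalized
```

Minimizing this quantity pushes the predicted distribution toward the teacher's.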
While the text masks are directly regressed as in Bert, the model predicts distributions over semantic classes for the masked image regions. This is achieved by minimizing the KL divergence, a measure of the similarity of distributions, between the output distribution of the pre-trained model used in feature extraction and the VilBert predictions. The performance results are depicted in figure 3.72.
FIGURE 3.72: Lu et al. (2019b): VilBert performance.
As mentioned before, the dual-stream architecture outperforms the single-stream architecture. Furthermore, pre-training considerably boosts performance, as does fine-tuning. Interestingly, the authors also study the effect of the dataset size and of the architecture depth. Performance increases monotonically with dataset size, suggesting that it can be further improved with more data. The results on the optimal layer depth are task-dependent. VQA and image retrieval reach peak performance at 6 layers, where a layer denotes a repeatable block as depicted in figure 3.69. Zero-shot image retrieval greatly benefits from even deeper models. However, the VCR and RefCOCO+ tasks seemingly benefit from shallower models. The VQA task is based on the VQA 2.0 dataset. Each image must be matched to one of ten answers. Hence, the VQA task is not open-ended but treated as a classification task. To achieve that, the model is amended by two MLP layers which use the element-wise product of the model-generated img and cls tokens. The VCR task is also posed as a multiple-choice problem with images from movie scenes. To fine-tune for the task, questions and answers are concatenated into four different text inputs and given as model input together
with the image. In the end, four scores are generated accordingly and selected through softmax. The RefCOCO+ task is a grounding task: an image region has to be selected according to a natural language reference. Caption-based image retrieval requires the model to find the image that corresponds to a given caption. The dataset used is the Flickr30k dataset, which contains 30,000 pictures, each with five captions of higher quality than automatically scraped web captions.
3.5.3 Flamingo
The VilBert model showed one way to actually combine visual and language inputs. In contrast, data2vec showed how to design a self-supervised model and how influential the actual training process as well as contextualization can be. A natural question, then, is whether we can build a truly multimodal architecture like VilBert that is self-supervised like data2vec, or at least needs little task-specific training, and how to optimize its training procedure.
In particular, both VilBert and data2vec were tested on multiple tasks, but each task needs slight re-adjustments to the model as well as additional fine-tuning. Ideally, a multimodal architecture would not only be efficient in its initial training, but also easily adaptable to different tasks. Finding ways to work not only with different input modalities, but also with different tasks, is crucial for building a more general AI. A promising approach in that direction is few-shot learning. The following section presents Flamingo (Alayrac et al., 2022), a few-shot multimodal architecture developed by DeepMind, which comprises key innovations such as handling arbitrarily interleaved visual and language (vislang) sequences as inputs, as well as ways to effectively combine pre-trained vision-only and language-only models. As such, it is a visually conditioned autoregressive text generation model.
Figure 3.73 demonstrates Flamingo's capabilities. It can function as a chat bot, describe pictures, and work with image sequences (videos), and in doing so it simply needs a few prompts.
At the heart of the model is a large language model, Chinchilla (Hoffmann et al., 2022), with 70B parameters. Large language models such as GPT-3 (Brown et al., 2020), as their name suggests, can be trained on a large amount of text data, which gives them impressive text-generation capabilities. However, multimodal generative modelling presents some specific challenges not present in language-only modelling. First of all, training large language models is expensive. Hence, it is paramount to work with a pre-trained version, but trying to teach a large language model to work with visual inputs as well has the potential to deteriorate or destabilize the pre-trained model. Second, large language models can suffer from memory constraints that are potentially severely aggravated by simply adding high-dimensional visual data to an input sequence.
Third, good generalist capabilities typically require a huge amount of heterogeneous training data. There might not exist enough labelled image-caption-pair data to successfully train a capable few-shot learning model in the vision-and-language domain. To train Flamingo, the authors address these challenges foremost by generating their own web-scraped multimodal dataset, similar to existing ones in the language-only domain. Furthermore, they use a perceiver architecture (Jaegle et al., 2021a) that resamples inputs into a fixed number of visual tokens. Finally, the self-attention layers of the language model are kept frozen during training while cross-attention layers are interleaved. A gating mechanism ensures that those new cross-attention layers do not interfere at model initialization, thereby improving stability and final performance.
FIGURE 3.73: Alayrac et al. (2022): Flamingo prompt-output examples.
Figure 3.74 shows the fundamental architecture of Flamingo. A pre-trained vision model as well as a pre-trained language model are frozen. Together they build the cornerstones of the model. The vision model is pre-trained using a contrastive text-image approach. Its role is to extract features such as colour, shape, nature, and the position of objects - typical semantic spatial features that one would use in querying. The language model is an existing pre-trained language model. On top of those frozen parts, the authors add a perceiver resampler and gated cross-attention layers as learnable architectures. The perceiver resampler turns the outputs of the vision model into a fixed set of visual tokens. Those visual tokens are then used in cross-attention layers which are interleaved into the frozen language model. As a result, Flamingo
can model the likelihood of some text y interleaved with a sequence of images or videos x as

p(y | x) = ∏_{l=1}^{L} p(y_l | y_{<l}, x_{≤l}),

where y_l is the l-th language token, y_{<l} the preceding language tokens, and x_{≤l} the images or videos preceding y_l in the interleaved sequence.
FIGURE 3.74: Alayrac et al. (2022): Flamingo model structure.
In terms of scope, the web-scraped interleaved dataset (M3W) contains about 182GB of text as well as roughly 185 million images. The authors pay special attention not to include traditional task-specific datasets curated particularly for machine learning purposes, in order to guarantee the generality of their modelling approach. As a second important data source, aligned image-text pairs are used, in particular the ALIGN dataset (Jia et al., 2021b). The data is further augmented with Long Text and Image Pairs (LTIP) as well as Video and Text Pairs (VTP). The latter datasets contain more descriptive captions than ALIGN.
Together, this process ensures that the available training data is sufficiently large and heterogeneous - two key properties necessary to achieve good few-shot performance. The training objective is to minimize the weighted sum of the dataset-specific expected negative log-likelihoods,

∑_{m=1}^{M} λ_m · E_{(x,y)∼D_m} [ −∑_{l=1}^{L} log p(y_l | y_{<l}, x_{≤l}) ].

Each dataset is weighted with a scalar λ_m, as datasets can be of different quality or feature different properties. Hence, it might be preferable to pay different attention to different datasets. According to the authors, tuning these weights was essential to overall performance. In practice, the optimization works as follows: a batch of visual-language sequences is sampled from each dataset and used to compute the gradient of the loss, weighted by the λ_m of that dataset. Importantly, the authors find that it is beneficial to accumulate the gradients over all datasets before triggering an update. Naturally, the actual datasets used to train the model are extremely important as well. In their ablation studies, the authors find that removing the web-scraped multimodal dataset from the training pool drops model performance, as measured across all selected tasks, from a score of 68.4 to 46.9. Removing the dataset containing aligned captions and images drops performance to a score of 56.5, and not accumulating gradients before updating decreases performance to 59.7.
Taking a closer look at the model architecture, the two key structures are the perceiver resampler and the gated cross-attention layers. Figure 3.75 shows the architecture of the perceiver (Jaegle et al., 2021a). Before the input data reaches the perceiver, it is processed by the vision encoder - a Normalizer-Free ResNet which is trained with a contrastive loss similar to the well-known CLIP model (Radford et al., 2021a) and yields a good trade-off between performance and efficiency.
The output of the vision encoder is a 2D grid which is then flattened before being fed into the perceiver that connects the vision encoder with the frozen language model. The resampling performed by the perceiver resampler is crucial to reduce the complexity of vision-text cross-attention in the next step. This is particularly notable for video inputs. Inside the perceiver, a set of learned latent queries cross-attends to the flattened vision encoder output. The number of outputs generated by the perceiver is equal to the number of learned latent queries.

4 Further Topics

A separator token ('|') is added to distinguish the observations from the following action, so that a simplified sequence looks as follows:

( ..., [x_Text, x_Images, x_DiscreteAndContinuousValues, '|', y_Action]_i, ... )

By using this approach, the transformer can predict the next action autoregressively, since it is a sequential problem. In the case of text, the action token is also a text token.
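The interleaved sequence construction above can be sketched in a few lines (the token names are made up for illustration):

```python
def build_sequence(episodes, sep="|"):
    # flatten timesteps into [observation tokens..., separator, action tokens...],
    # so the model always predicts the action right after the separator
    seq = []
    for obs_tokens, action_tokens in episodes:
        seq.extend(obs_tokens)
        seq.append(sep)
        seq.extend(action_tokens)
    return seq

# two hypothetical timesteps: (observation tokens, action tokens)
eps = [(["img_12", "txt_5"], ["act_3"]), (["img_7", "txt_1"], ["act_9"])]
print(build_sequence(eps))
# ['img_12', 'txt_5', '|', 'act_3', 'img_7', 'txt_1', '|', 'act_9']
```

The resulting flat token sequence is what the decoder-only transformer consumes autoregressively.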
Since it is only necessary to predict the action based on the previous values, a mask is added to the cross-entropy loss function which masks out the preceding values, so that only the next action is predicted and not the conditions for the action. The masking function is always one for text, since every previous text token is necessary for language modeling.
Gato was evaluated on reinforcement-learning (RL) tasks against specialist RL agents, where Gato performed worse than the specialist agents. On unseen tasks, Gato required fine-tuning, since few-shot learning is not feasible due to the input-length restrictions of transformers. However, the results were mixed. Some improvements were possible and the expert was outperformed, but in other cases massive fine-tuning efforts only led to small gains. It was found that the fine-tuned generalist agent outperformed a specialist agent trained particularly for the task most of the time. Only on the Atari Boxing (Bellemare et al., 2013) task was Gato outperformed by the specialist Gato model; both performed far below another task-specific model used as a baseline. In robotics, Gato showed behavior comparable to the baseline SOTA model. Additionally, Gato also showed capabilities in image captioning and dialogue modeling, although these aspects were not elaborated further.
Like OFA, Gato can sequentialize all input and produce a sequential output that can be transformed back to solve a task. It was shown that Gato can sometimes transfer knowledge to unseen tasks and outperform the specialist agent most of the time.
4.3.2.5 Comparison
Although many tasks and modalities lead to a curse of dimensionality for comparison, the architectures and the respective modifications of the introduced systems remain simple to compare.
A trend toward seq2seq models can be seen, with MultiModel, OFA, and Gato solving tasks in a seq2seq manner.
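The action-only masking of the cross-entropy loss described above can be sketched as follows (a numpy toy, with random log-probabilities standing in for model output):

```python
import numpy as np

def masked_nll(log_probs, targets, mask):
    # average negative log-likelihood over positions where mask == 1
    # (the action tokens; for pure text the mask is all ones)
    picked = log_probs[np.arange(len(targets)), targets]
    return float(-(picked * mask).sum() / mask.sum())

rng = np.random.default_rng(0)
logits = rng.standard_normal((6, 10))  # 6 sequence positions, 10-token vocabulary
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
targets = np.array([1, 4, 2, 7, 0, 3])
mask = np.array([0, 0, 0, 0, 1, 1])    # only the last two positions are actions
print(masked_nll(log_probs, targets, mask) > 0)  # True
```

Observation positions contribute nothing to the loss, so gradients only flow from the action predictions.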
The most prominent similarity is the transformer architecture, used in full (encoder and decoder) in OFA and truncated (decoder only) in Gato. Another significant similarity between both architectures is the use of a particular ordering of input and output. In Gato, the sequence is organized around predicting an action using a special token, while OFA produces a sequence as a solution, which can be a bounding box or the token sequence of an image to be fed into the generator module. While Gato can solve tasks from robotics and game playing, OFA can also generate images. However, both architectures require specific modules to decode the tokens into the respective modality.
Gato and OFA both use a shared representation space. Minor details differ: the image tokenization process is different, and Gato can encode more modalities than the published version of OFA (although extending OFA is theoretically simple).
MultiModel also shows some familiar characteristics. The architecture is from the pre-transformer age but already carries many characteristics of the transformer architecture, like the use of attention, positional encodings, and an encoder-decoder structure. Since the output in the presented version is only text or a classification, there is no need for the special orderings used in OFA and Gato. The necessity to produce modality-specific output in modality nets is similar to the generator module in OFA that produces images. However, in OFA the tokens are already produced in an intermediate step, while the modality nets are crucial for producing the final output in MultiModel. UniT follows an entirely different, more pragmatic approach, leveraging the contextual capabilities of the transformer decoder. M modalities can be encoded as a sequence on which the transformer decoder fuses the modalities and learns their relationships.
The use of special tokens for each task and of task-specific heads focuses the model on the requested task, yet also requires tuning the model specifically.
None of the models besides OFA achieved SOTA results. Compared to specialist models, the general models were comparable in their results (Gato, UniT, MultiModel). MultiModel, OFA, and Gato showed transferability on low-resource or unseen tasks. However, more research in this direction is highly recommended. MultiModel was only compared on a low-resource task against a specialist model, and OFA was not compared to another model on the unseen task. Gato performed better than a specialist model trained from scratch on most unseen tasks, but failed against the specialist model in Atari Boxing.

| Model | Approach | Modalities | Outperformed specialist model? | Unseen tasks? | Parameters | Year |
|---|---|---|---|---|---|---|
| OFA | Seq2Seq | Vision, text | Yes | - | 33M-930M | 2022 |
| Gato | Seq2Seq | Vision, text, robotics, discrete entities (e.g., buttons) | In most cases | Yes | 79M-1.18B | 2022 |
| UniT | M encoders, task-specific head | Vision, text | No | No | 201M | 2021 |
| MultiModel | Different modality nets for Seq2Seq | Vision, text, audio, categorical | Comparable | Excelled on low-resource task | Unknown | 2017 |

Comparing the models among each other becomes difficult with more modalities and tasks, which is its own curse of dimensionality. For example, Gato also included robotics and RL, which none of the other models did. MultiModel has a modality net for sound, while UniT and OFA only work on vision and text. Further research into the comparability of multipurpose models is essential.
4.3.3 Pathways and Promising Works
Although models have become more capable of solving complex tasks, significant limitations remain.
A persisting issue in current deep learning is the necessity to train from scratch and disregard already obtained knowledge, which is highly ineffective compared to human intelligence. Another issue arises from the ever-growing dense networks that require more and more resources.
In this section, we review the Pathways proposal (Dean, 2021) and promising techniques to address these issues. Overcoming these problems would be especially beneficial for multipurpose models. Reusability of knowledge is crucial for the multitask perspective, and improving the performance of potentially billion-parameter-sized models would also have a significant positive impact.
FIGURE 4.18: Concept of Pathways. Different tasks follow different paths to different expert models. From Dean (2021), screenshot August 31st, 2022.
4.3.3.1 Pathways Proposal
Pathways (Dean, 2021) follows a different idea than the previously seen methods. The model consists of a large graph through which data can be forward-passed. The nodes of the graph are neural networks themselves. A pass through this network does not activate all nodes, and thus not all neural networks, but only a few: the pass follows a specific path from the entry to the network's exit. The underlying idea is similar to the mixture-of-experts models described previously. Only the specific networks dedicated to solving a problem are activated during inference.
At this point, it is worth recalling that multitask learning aims to generalize better on new tasks, since knowledge about previously learned tasks can be applied. This idea is the foundation of Pathways too, where specialist networks (nodes) are combined into a larger network. It is assumed that the model's generalization capabilities increase significantly by finding an appropriate path for a task to the appropriate expert nodes.
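A toy sketch of such a sparse forward pass, with stand-in "expert" functions instead of neural networks (names and operations are made up for illustration):

```python
def pathways_forward(x, experts, path):
    # forward pass through only the expert nodes on the chosen path;
    # all other nodes in the graph stay inactive (sparse activation)
    for node in path:
        x = experts[node](x)
    return x

experts = {
    "vision_a": lambda x: x + 1,   # placeholder expert networks
    "reason_b": lambda x: x * 2,
    "decode_c": lambda x: x - 3,
}
# a task activates only two of the three experts
print(pathways_forward(5, experts, path=["vision_a", "decode_c"]))  # 3
```

Different tasks would select different paths, reusing whichever experts are relevant.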
In this setup, the particular task-specific problem-solving capabilities are combined. Furthermore, multimodality is also considered as a potential extension. Adding more modalities might not be a difficult problem, considering the architecture of the previously introduced transformer-based models. Overall, the approach of a sparse model combining multiple experts offers many opportunities to combine modalities and reuse task-specific capabilities. The sparsity of the model decreases inference time, since only a few parts of the network are activated during inference.
Another aspect of the Pathways proposal concerns current hardware limitations. It is already observable that Moore's law (compute capacity doubles roughly every two years) has been slowing down substantially, while deep learning research grew exponentially in the late 2010s (Dean, 2020). Thus, hardware also needs to be adapted to the growing demands of deep learning. In the context of the Pathways proposal, a novel framework for Google data centers has been introduced, aiming to reduce overhead during computation and to access specific parts of the model, utilizing the technical advantages of sparse networks. As opposed to dense models, where the whole model must be accessed, with sparse networks it is not necessary to use the whole network but only chunks of it. So far, several large pre-trained models have been introduced based on the new training framework: the Pathways Language Model (PaLM) (Chowdhery et al., 2022), at the time of writing the largest language model with 540 billion parameters; Minerva (Lewkowycz et al., 2022), which is based on PaLM; and Parti (Yu et al., 2022a), a text-to-image model.
4.3.3.2 PathNet
An earlier approach for a sparse multitask network, which looks strikingly similar, is PathNet (Fernando et al., 2017).
PathNet is a training concept that reuses knowledge from a previously learned task without the risk of catastrophic forgetting (knowledge being overwritten), thus using solely the positive aspects of multitask learning. At the core of PathNet lies an evolutionary algorithm (EA).
Neural networks are often depicted as a graph in which the input is directed to all nodes in a hidden layer, whose outputs are again passed to all nodes in the next hidden layer or an output layer. In the case of PathNet, each node is itself a neural network. The training algorithm finds the best paths for a specific task through this network.
At first, random paths through the network are initialized; then the paths are trained for T epochs. After training, the paths are evaluated against each other. The winning path overwrites the losing path. However, to maintain exploration, the overwritten path is mutated by randomly including neighbors of the winning path. This continues until a stopping criterion (e.g., a number of epochs) is reached. The best path is then frozen, so that no more modifications to the parameters of the networks on this path are possible. All other parameters are re-initialized, and a new task-specific head is initialized. The same procedure is now repeated for the next task. The main difference is that the previously obtained path, including its trained networks, remains frozen during training, so that the model can transfer knowledge from the previous task to the new one. The model then finds appropriate paths throughout the network until the stopping criterion is met again.
PathNet was evaluated on supervised learning tasks and RL scenarios. Learning from scratch and fine-tuning were chosen as baselines. For fine-tuning, the first path was chosen as a base model and fine-tuned on the second task.
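The tournament-and-mutate routine described above can be sketched as follows (the real PathNet evaluates paths by training and measuring task performance; the fitness function here is a stand-in):

```python
import random

def tournament_step(paths, fitness, mutate_rate=0.1, n_modules=10):
    # two random paths compete; the winner's genotype overwrites the
    # loser's, and the copy is mutated by swapping in neighboring modules
    a, b = random.sample(range(len(paths)), 2)
    winner, loser = (a, b) if fitness(paths[a]) >= fitness(paths[b]) else (b, a)
    child = list(paths[winner])
    for i in range(len(child)):
        if random.random() < mutate_rate:
            child[i] = (child[i] + random.choice([-1, 1])) % n_modules
    paths[loser] = child
    return paths

random.seed(0)
# 4 candidate paths, each selecting one of 10 modules in 3 layers
paths = [[random.randrange(10) for _ in range(3)] for _ in range(4)]
fitness = lambda p: -sum(p)  # toy fitness: prefer low module indices
paths = tournament_step(paths, fitness)
print(len(paths), all(len(p) == 3 for p in paths))  # 4 True
```

Repeating this step concentrates the population on well-performing paths while mutation keeps exploring neighbors.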
Overall, PathNet improved training time and prediction quality on the second task compared to standard fine-tuning and learning from scratch. PathNet has shown that different tasks can reuse the knowledge from training on previous tasks without suffering from catastrophic forgetting.
FIGURE 4.19: Training PathNet on two tasks. At first, random paths are initialized (1), then trained (2-3) and fixed (4). The same procedure is repeated for the next task, using the previously fixed paths and new parameters in all other nodes (5-9). From Fernando et al. (2017).
4.3.3.3 LIMoE
LIMoE (Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts) (Mustafa et al., 2022) combines text and vision input using an MoE-enhanced transformer encoder.
While previous methods used two models (two-tower) to encode the modalities, LIMoE is based on a single model, in which both modalities are processed by one modified transformer (one-tower). The text data is encoded using one-hot SentencePiece (Kudo and Richardson, 2018) encodings, while images are tokenized in the same way as in ViT (Dosovitskiy et al., 2020a) (elaborated further in the previous chapter). The main difference to the standard transformer is an MoE layer placed where the feed-forward network usually lies. In this layer, E experts are used, which are themselves feed-forward networks. For each token, K appropriate experts map the token further downstream. The routing is computed by a gating network, which decides which K experts are called. Another feature is a fixed-length buffer for each expert in the MoE layer. This buffer stores tokens before an expert network processes them, under the assumption that the allocation of tokens across experts is balanced. If it is not possible to buffer further tokens for an expert, those tokens are dropped.
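The gated top-K routing with per-expert capacity can be sketched as follows (a naive first-come-first-served version; dimensions and weights are made up):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def route_tokens(tokens, gate_w, k=1, capacity=2):
    # the gating network scores each token for each expert; the top-k
    # experts per token are selected, and each expert buffers at most
    # `capacity` tokens -- tokens beyond that are dropped
    probs = softmax(tokens @ gate_w)
    assignments = {e: [] for e in range(gate_w.shape[1])}
    for i, p in enumerate(probs):
        for e in np.argsort(p)[::-1][:k]:
            if len(assignments[e]) < capacity:
                assignments[e].append(i)
    return assignments

rng = np.random.default_rng(0)
tokens = rng.standard_normal((6, 4))   # 6 tokens (text and image patches alike)
gate_w = rng.standard_normal((4, 3))   # gating weights for 3 experts
buckets = route_tokens(tokens, gate_w)
print(sum(len(v) for v in buckets.values()) <= 6)  # True: overflow tokens drop
```

Processing tokens in arrival order is the weakness this naive version shares with plain MoE routing.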
To process the more important tokens first, Batch Priority Routing (Riquelme et al., 2021) is used as a ranking mechanism. The output of the transformer encoder is then average-pooled and subsequently multiplied with a modality-specific weight matrix, which produces the final output for the tokens of both modalities.
FIGURE 4.20: Architecture of LIMoE. From Mustafa et al. (2022).
The model is trained using a contrastive objective. In this case, the contrastive loss aims to maximize the similarity of paired visual and textual embeddings while minimizing it for all unpaired combinations. This can be achieved by using the dot product as a similarity measure between the embeddings of both modalities, which provides a differentiable operation through which the overall loss can be minimized.
Additionally, the pitfalls of a multimodal MoE are considered. One challenge in MoE is the correct balancing of the routing to the experts, which is even more challenging with unbalanced multimodal data. To address this issue, two new entropy-based losses are introduced. Entropy is an appropriate quantity here since it measures the uniformity of a distribution, which is exactly what needs to be controlled to balance the expert assignments. The losses control the allocation of experts to tokens, which is also necessary to satisfy the assumptions behind the expert buffers. One loss considers the token-level (local) routing distribution, the other the overall (global) expert routing distribution. The local loss discourages uniform expert allocation, so that each token is indeed assigned to a few specific experts.
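The two entropy terms, local and global, might be sketched roughly as follows (a simplification; LIMoE's actual auxiliary losses are more involved):

```python
import numpy as np

def entropy(p, eps=1e-12, axis=-1):
    p = np.clip(p, eps, 1.0)
    return -(p * np.log(p)).sum(axis=axis)

def entropy_aux_losses(router_probs):
    # router_probs: (tokens, experts) routing distribution for one modality.
    # local term: low per-token entropy -> each token commits to few experts
    local = entropy(router_probs).mean()
    # global term: high entropy of the batch-averaged distribution
    # -> tokens spread over all experts (returned negated, as a loss)
    global_ = -entropy(router_probs.mean(axis=0))
    return float(local), float(global_)

# tokens spread over different experts vs. all collapsed onto one expert
peaked = np.array([[0.98, 0.01, 0.01], [0.01, 0.98, 0.01], [0.01, 0.01, 0.98]])
collapsed = np.array([[0.98, 0.01, 0.01]] * 3)
print(entropy_aux_losses(peaked)[1] < entropy_aux_losses(collapsed)[1])  # True
```

The spread-out routing scores a lower global loss than the collapsed one, while both keep per-token assignments decisive.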
In contrast, the global loss aims at uniformity over all tokens, to avoid a collapse in which tokens are assigned to only a few experts that do not have the capacity to deal with all of them. These losses are computed for each modality. Furthermore, established auxiliary losses for training MoE models were added to avoid their known downsides.
LIMoE was compared against similar models such as CLIP (Radford et al., 2021a), using ImageNet (Deng et al., 2009) and COCO (Lin et al., 2014c) as test datasets. Overall, LIMoE-H/14 (the largest model, with 12 MoE layers and 32 experts per layer) achieved strong performance, considering that a single model was used for two modalities against specialist models in two-tower setups. It also outperformed CLIP by a significant margin while using only minimally more parameters; models that achieved results similar to LIMoE used at least twice the number of parameters for a forward pass.
LIMoE shows that an MoE-based model can achieve impressive results in a multimodal setting. Current language and vision encoding techniques are combined with the upsides of the MoE architecture, leading to a single model that can outperform state-of-the-art models like CLIP.
4.3.3.4 muNet (Multitask Network)
muNet (Gesmundo and Dean, 2022) is an architecture that maximizes the reusability of previously learned knowledge by using an evolutionary algorithm to evolve a new model. The authors address the current practice of fine-tuning, where a pre-trained model is copied and then explicitly trained on a task, overwriting previous knowledge.
An initial model is evolved by an evolutionary algorithm to fit specific tasks while keeping the previously learned knowledge. Eventually, a set of models is obtained, which includes new neural networks largely based on the parameters of the initial model.
The new modules can be seen as paths to task-specific modifications of the initial network.

The evolutionary algorithm (EA) of muNet starts with an initially proposed model that is mutated further on. All mutations are stored, so that once a set of candidates is available, it can be split into models trained for the current task (active population) and models for other tasks (inactive population). These two sets become the candidate sets for the following task-specific iterations. Training on a specific task follows a loop of sampling candidate models, mutating them, training, and evaluation. The best-scoring model is added to the active population for further mutation. A sampling algorithm balances exploration and exploitation to pick a candidate model for subsequent mutation. The active population is ordered in a descending list based on the models' scores. Each list entry is then revisited, starting from the highest-scoring model, so that the better-performing models are considered first (exploitation). The draw probability is computed as:

P(m | t) = 0.5^(#timesSelected(m, t)),

where #timesSelected(m, t) is the number of previous mutations based on model m for task t. The more unsuccessful mutations the model has accumulated, the smaller its draw probability becomes. Thus, exploration is emphasized by considering previous attempts and allowing other models to be preferred as well. However, if this method does not yield a candidate, a model is drawn from the union of the inactive and active populations. Applying mutations is the next step in the algorithm. A random number of mutations is drawn from the set of possible mutations, which include:

• Layer Cloning: A layer is cloned for training. The layer's parameters are copied from the parent model so that training can continue using the same knowledge. The other layers are still used but are not updated. Additionally, the task-specific head layer is cloned to account for the underlying changes.
When training on a new task, the head is also newly initialized.
• Layer Insertion: Two layers are added to the model as residual adapters (Rebuffi et al., 2017; Houlsby et al., 2019). The second layer is zero-initialized to preserve an identity function, so that training can continue from the state before the mutation.
• Layer Removal: Layers are skipped while all other layers of the parent model are still used in a frozen state.
• Hyperparameter Change: Hyperparameters close to those of the parent model are sampled. A list of neighboring values is constructed from which a parameter is drawn.

Subsequently, the models are trained on the task and scored. If a mutated model is better than its parent, it is also added to the task's set of active models. This routine is performed for all tasks iteratively and can be repeated several times. Ultimately, only the best-scoring models are kept for each task, yielding a list of models, each fit to a particular task. muNet was evaluated for fine-tuning against a ViT instance that was aimed at being the most generalizable one (Steiner et al., 2021). The evaluation benchmarks consisted of multiple classification problems (to simulate multitasking). ViT was fine-tuned on all of these tasks as a baseline. In contrast, another ViT was evolved using muNet and evaluated on the same benchmarks. The approach using muNet outperformed the fine-tuned ViT while using significantly fewer parameters.

muNet offers a simple, evolution-based approach to fine-tuning that keeps all previously acquired knowledge intact, thus maximizing reusability.

4.3.4 Conclusion Pathways

The introduced models show promising novel features that might improve multipurpose models. However, these models can only be improved if research is done to combine the distinct concepts.
PathNet and muNet offer novel approaches to leverage already acquired knowledge, while LIMoE improves the handling of different modalities in a single, sparse model. Furthermore, it also becomes necessary to conduct research into scaling these concepts up. Since the multitask-related models (PathNet and muNet) only included a few tasks, introducing more tasks for training and testing might offer insights into how transfer between tasks succeeds and fails.

LIMoE offers a promising architecture with respect to performance. Due to the sparsity of the MoE layers, LIMoE is faster, while it also outperforms previous dense models. Using MoE layers in transformers might also be a viable path for models like OFA and Gato. Combining the flexible encoding techniques of these models with the relative sparsity of LIMoE might result in even more capable and efficient models. We therefore recommend further research in this direction.

Another potential path for future research is intelligent routing for evolving methods like muNet and PathNet. Evolutionary models offer a promising approach to leveraging previous knowledge. However, the resulting models are tailored to a particular task. Novel routing techniques that send data to dedicated expert nodes in a complex network of models might help models generalize, as outlined in the Pathways proposal.

4.3.5 Discussion

We reviewed multipurpose models that have become capable of solving multiple tasks from different modalities. The transformer architecture boosted the development in this field: three of the four presented models were transformer-based and from recent years. Multipurpose models offer an opportunity to use one model instead of many different expert models. Furthermore, some multipurpose models (Gato, OFA) also outperformed expert models.
However, Gato also showed inferior performance on ATARI Boxing compared to competing models, indicating that research is still required to explore the relationships between tasks. We also presented promising novel architectures that alleviate or may solve problems in current multipurpose models. However, further issues remain that have not been solved by research to this day:

• A pitfall of models of these sizes is their low accessibility. Researchers need to access the models through an API, since running these models on a few GPUs will likely be infeasible. It is unlikely that we will see BERT-like engagement from the research community if access to the models remains limited. On the contrary, more open-source collaborations, as seen with EleutherAI or Hugging Face, might evolve as a countermovement, and techniques like distillation (Hinton et al., 2015a) might become more critical.
• Another issue with multipurpose models is the lack of metrics. Current metrics are not suited for multitask and multimodal models. Evaluation might also become harder since many different modalities can be used, as seen here with the robotics capability of Gato, which was not used in any of the other reviewed models.
• Eventually, it is also necessary to consider the societal impact. The bias problem will also become an issue in multipurpose models, especially since multiple datasets must be considered.
• Also, the environmental impact of training large models needs to be considered, since larger models will likely yield better performance according to scaling laws (Reed et al., 2022) but will also have a larger carbon footprint.

4.4 Generative Art

Author: Nadja Sauter
Supervisor: Jann Goschenhofer

As we have seen in subsection 3.2, computers can create images based only on text prompts via multimodal deep learning. This capability is also used in digital arts in the field of "generative art", also known as "computer art".
This new movement comprises all artwork where the human artist cedes control to an autonomous system (Galanter, 2016). In this way everyone, even artistically untrained people, can easily create pictures, as the computer takes over the image generation. In some way, the computer becomes the artist, with some sort of creativity, a distinctly human ability. In this chapter, we want to give an overview of how computers have improved over time at generating images and how this is used in the contemporary arts scene. For instance, in Figure 4.21 we used the seal of the Ludwig Maximilians University and changed its style to Van Gogh's Sunflowers painting via the Neural Style Transfer algorithm and the CLIP + VQGAN method, which fuses the logo with sunflowers in a Van-Gogh-style way.

FIGURE 4.21: LMU logo in the style of Van Gogh's Sunflowers painting

4.4.1 Historical Overview

The first attempt to use AI to generate pictures was made by the engineer Alexander Mordvintsev (2015) with his "DeepDream" software. He used Convolutional Neural Networks to generate very interesting and abstract images based on the activation of a layer, visualizing the patterns learned by a neural network. Below you can see a picture of a Labrador after it was processed by the DeepDream algorithm.

In the following year, Gatys et al. (2016) investigated methods to transfer the style of pictures. This method was used to transfer the style of Van Gogh's Sunflowers painting to the LMU seal at the beginning of this chapter (see Figure 4.21). Besides, in Figure 4.23 below you can see the same Labrador picture from Figure 4.22 in Kandinsky style.

Furthermore, the architecture of Generative Adversarial Networks (GANs), first introduced by Goodfellow et al. (2014a), was used by another research group, Karras et al. (2019), to create very realistic fake images with their architecture StyleGAN.
For instance, one can create pictures of people who do not exist but look totally realistic (see Figure 4.24).

FIGURE 4.22: Picture of a Labrador processed by DeepDream (Google Colab)

FIGURE 4.23: Picture of a Labrador in Kandinsky style (Google Colab)

FIGURE 4.24: Fake face generated by StyleGAN

Nevertheless, it was almost impossible to control the exact output of these early forms of AI art. There was no option to specify in detail what the result should look like. For instance, you always get a human face with the aforementioned StyleGAN application, but you cannot ask it to generate a blond girl with green eyes. This can be achieved by applying the artist-critic paradigm (Soderlund and Blair, 2018): the computer as artist generates a picture based on what the neural network learned in the training phase (e.g., StyleGAN learns to generate pictures of human faces). Additionally, a critic is used to tell the computer whether the output satisfies the concrete idea of the human artist. For this reason, multimodal deep learning models emerged in the field of generative art. Here, one can control the output with the help of text prompting, since the critic can check whether the generated picture matches the initial text description. Looking at the previous StyleGAN example, the multimodal architecture supervises whether the output picture is indeed a blond girl with green eyes or not. A new class of models for generating pictures evolved.

This idea was taken up by OpenAI with their models DALL-E (Ramesh et al., 2021a) and CLIP (Radford et al., 2021b), which were released in January 2021; CLIP, in particular, can serve as a critic in such multimodal setups. Only a few days after the release, Ryan Murdock combined CLIP (critic) with the already existing neural network BigGAN (artist) in his "The Big Sleep" software. Furthermore, Patashnik et al.
(2021) developed StyleCLIP, a combination of StyleGAN (artist) and CLIP (critic), to edit parts of images via text instructions. In the following months, Katherine Crowson combined CLIP as critic with the existing VQGAN algorithm as artist. She also hooked up CLIP with guided diffusion models as artists to yield more fine-grained results. This approach was further investigated by OpenAI, which published a paper about guided diffusion models (Dhariwal and Nichol, 2021) in May 2021. Moreover, in December 2021 they introduced GLIDE (Nichol et al., 2021a), a model with CLIP or classifier-free guidance as critics and diffusion models as artists. For more technical details about text2img methods like DALL-E and GLIDE, refer to subsection 3.2; for text-supporting CV models like CLIP, see subsection 3.4.

4.4.2 How to use these models?

A lot of different notebooks are publicly available to apply the different pre-trained models. In general, all notebooks work pretty similarly: one only needs to enter a text prompt in the code, and after running the notebook the computer generates a picture based on these instructions. It is relatively easy and requires no prior coding knowledge. Moreover, there are also some API and GUI applications (e.g., MindsEye beta) where no programming knowledge is needed at all. Using these models, it is important to think about how exactly one enters the respective text prompt. One can influence the output in a desired way with small changes to the short text instruction. This is also known as "prompt engineering". For instance, at the beginning of this chapter, we entered the prompt "in the style of Van Gogh" to change the style of the LMU seal. In this context, a special trick is to append "unreal engine" (Aran, 2021), which makes the resulting pictures more realistic and of higher quality.
This seems surprising at first, but the models were trained on data from the internet, including pictures from the software company Epic Games, which has a popular 3D video game engine called "Unreal Engine". This is one of the most popular prompting tricks.

Unfortunately, OpenAI has never released DALL-E. There is only an open-source version called ruDALL-E (Shonenkov, 2021) that was trained on Russian language data. Besides, Hugging Face hosts DALL-E mini (Boris, 2022), where one can generate pictures but does not have access to the model itself. PyTorch offers a replication of the DALL-E code (OpenAI, 2021) but no trained model. Furthermore, CLIP was released without the training data used. However, there exists an open-source dataset with CLIP embeddings called LAION-400M (Schuhmann et al., 2021b). In the following, we used different publicly available notebooks to try out the models CLIP + BigGAN, CLIP + VQGAN, CLIP + Guided Diffusion, and GLIDE with the text prompts "a fall landscape with a small cottage next to a lake" (see Figure 4.25) and "panda mad scientist mixing sparkling chemicals, artstation" (see Figure 4.26). The first prompt yields pretty realistic results, whereas the second prompt results in more diverse and "crazy" outputs. That is because the panda prompt is more abstract than the first one and hence more difficult to illustrate. In addition, some of the notebooks run at lower resolution due to computational limitations. Besides, GLIDE is also downsized by the publisher: the released smaller model consists of 300 million parameters, whereas the unreleased model has about 3.5 billion parameters (Nichol et al., 2021a). So better results are possible with higher computational power and other implementations of the models.
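The artist-critic interplay that all of these notebooks build on can be illustrated with a toy optimization loop. This is a deliberately simplified sketch: a plain cosine-similarity critic and random hill climbing stand in for CLIP and for gradient-guided artists like VQGAN or diffusion models, and all function names are our own.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def artist_critic(target_emb, encode, steps=200, sigma=0.1, seed=0):
    """Toy artist-critic loop: the 'artist' proposes random perturbations
    of an image vector, the 'critic' scores each proposal by cosine
    similarity between its embedding and the prompt embedding, and only
    improvements are kept, so the score never decreases."""
    rng = np.random.default_rng(seed)
    image = rng.normal(size=target_emb.shape)   # start from noise
    score = cosine(encode(image), target_emb)
    for _ in range(steps):
        proposal = image + sigma * rng.normal(size=image.shape)
        candidate = cosine(encode(proposal), target_emb)
        if candidate > score:                   # critic accepts the proposal
            image, score = proposal, candidate
    return image, score
```

In the real systems, the critic's similarity score is differentiable, so the artist is updated by gradient ascent on it rather than by random search; starting from an existing image instead of noise gives the style-transfer-like behavior shown with the LMU seal.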
FIGURE 4.25: Comparison of different models with the prompt "fall landscape with a small cottage next to a lake"

FIGURE 4.26: Comparison of different models with the prompt "panda mad scientist mixing sparkling chemicals, artstation"

4.4.3 Different tasks and modalities

So far, we have concentrated on the two modalities text and image. Combining both of them, one can tackle different tasks with the models mentioned above. The main usage is to generate images based on a text prompt. One can either start from noise or choose a real image as the starting point (Qiao et al., 2022). This was done at the beginning of this chapter with the LMU seal and CLIP + VQGAN (see Figure 4.21): instead of starting from noise, the model started from the LMU seal as initialization and then used the prompt "in the style of Van Gogh". The video captures how the model develops during fitting. In the end, the typical Van Gogh sunflowers emerge, as well as what could be a part of Van Gogh's face.

Furthermore, one can edit, extend, crop, and search images with models like GLIDE (Nichol et al., 2021a). For instance, Nichol et al. (2021a) fine-tuned the model for text-conditional image inpainting (see Figure 4.27). By marking some area in the picture, here in green, and adding a text prompt, one can edit pictures very easily and precisely. This is quite impressive, as the model needs to understand from the text prompt which object should be filled in and then render it in the correct style of the surroundings to produce a realistic outcome. Another idea is to use a sketch of a drawing and let the model fill in the details based on a text caption (see Figure 4.28 below). This allows controlled changes to parts of pictures with relatively little effort. In this way, GLIDE can be used to generate pictures out of random noise, but also to edit pictures in a specific way.
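The compositing step behind such inpainting can be sketched as follows. This is an illustrative simplification: GLIDE fine-tunes the diffusion model on masked inputs rather than applying a single blend, and the function name is ours.

```python
import numpy as np

def composite(original, generated, mask):
    """Inpainting-style compositing: inside the user-marked region
    (mask == 1) the model's output is used, outside it the original
    pixels are kept unchanged. Some diffusion-based inpainters apply
    a blend like this at every denoising step so the filled-in region
    stays consistent with its surroundings; here it is shown once."""
    mask = mask.astype(float)
    return mask * generated + (1.0 - mask) * original
```

For example, marking the top-left pixel of a black image and "generating" white content changes only that pixel while the rest of the image is preserved.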
Furthermore, it is also possible to combine other modalities (see more details in subsection 4.1). For instance, WZRD (2020) accompanies custom videos with suitable audio. It is even imaginable to create sculptures with 3D printers (McCormack and Gambardella, 2022).

FIGURE 4.27: Text-conditional image inpainting examples with GLIDE (Nichol et al., 2021a); prompts: "an old car in a snowy forest" and "a man wearing a white hat"

FIGURE 4.28: Text-conditional edit from a user sketch with GLIDE (Nichol et al., 2021a)

4.4.4 Discussion and prospects

In recent years, methods to generate images via text prompting have improved tremendously, and a new field of art has arisen. It is surprising how these models are able to create images based only on a short text instruction; AI has achieved some level of creativity. It is up for discussion to what extent the computer is becoming the artist in generative art and in this way replacing the human artist. However, there is still no direct loss function that can calculate how aesthetically pleasing a picture is (Esser et al., 2020). This is probably also quite subjective and cannot be answered the same way for everyone. Most of the time the computer works as an aid for the creative process by generating multiple images. Then, the human artist can pick the best outcome or vary the text prompt to improve the output in a desired way. However, the better the AI becomes, the less the human artist needs to intervene in this process.

Furthermore, as the output becomes more and more realistic, there is the risk that these methods are abused to facilitate plagiarism or to create fake content and spread misleading information (Dehouche, 2021). After all, the outputs look totally realistic but are completely made up and generated by the computer. For this reason, some organisations like OpenAI do not release all their models (e.g., DALL-E) or downstream models (e.g., CLIP).
On the other hand, from a scientific point of view, it is important to get access to such models to continue research.

Moreover, similarly to most deep learning algorithms, these models are affected by biases in the input data (Srinivasan and Uchino, 2021). For instance, Esser et al. (2020) point out that CLIP text embeddings associate a human being more with a man than with a woman. It might thus be more likely that such models generate a man for the text prompt "human being" than a woman. This effect needs to be further investigated and should be removed.

After all, generative art can be used to create Non-Fungible Tokens (NFTs) relatively easily. NFTs are digital artworks to which a special digital signature is added, making them unique and in this way non-fungible (Wang et al., 2021). The digital artwork is bought and sold online, often by means of cryptocurrency. That is why this field is also called Cryptoart. This provides the perfect platform to sell generative art. However, this trading market is quite new and controversial, similar to cryptocurrency trading in general.

In conclusion, generative art is a new and impressive field. It combines technology with art, two rather opposite fields. The methods are already really impressive and are still getting better and better. For instance, this year OpenAI already published DALL-E 2 (Ramesh et al., 2022a), which outperforms DALL-E. It remains highly interesting to follow the developments in this field.

5 Conclusion

Author: Nadja Sauter
Supervisor: Matthias Aßenmacher

It is very impressive how multimodal architectures have developed, especially over the course of the last two years. In particular, methods to generate pictures based on text prompts, like DALL-E, became incredibly good at their "job".
A lot of people are fascinated by the stunning results, and a huge hype about these AI-generated images evolved on the internet, especially on Twitter. In this way, the models were investigated not only by researchers but also by the online community (e.g., Katherine Crowson alias Rivers Have Wings). Even in the art scene these methods attracted a lot of attention, as shown in our use case "Generative Art" (subsection 4.4). Apart from that, it is possible to deploy these methods commercially, for instance in film production or the gaming industry (e.g., creating characters for games). However, this might also result in problems of copyright, an issue which has not yet been dealt with.

It is also impressive how realistic and precise the outputs of such architectures are. On the other hand, these methods can also be abused to spread misleading information, as it is often very difficult to distinguish between a fake and a real picture by only looking at it. This can be systematically exploited to manipulate public opinion by spreading AI-manipulated media, also called deep fakes. That is why researchers like Joshi et al. (2021) demand automated tools capable of detecting these fabrications. Apart from that, like most deep learning models, multimodal architectures are not free from bias, which also needs to be investigated further (Esser et al., 2020). Besides, the algorithms are very complex, which is why they are often called "black-box" models, meaning that one cannot directly retrace how the model came to a certain solution or decision. This may limit their social acceptance and usability, as the underlying process is not credible and transparent enough (Joshi et al., 2021). For instance, in medical applications such as predicting the presence or absence of cancer, apart from the decision of the AI, the reasoning and the certainty are highly relevant for doctors and patients.
Furthermore, there is a clear trend in recent years to build more and more complex architectures in order to achieve higher performance. For instance, OpenAI's language model GPT-2 had about 1.5 billion parameters (Radford et al., 2019a), whereas its successor GPT-3 has about 175 billion parameters (Brown et al., 2020). Increasing the number of parameters often helps to improve model performance, but all of these parameters need to be trained and stored, which takes a lot of time, enormous computational power, and storage. For example, training GPT-2 took about one week (168 hours) on 32 TPUv3 chips (Strubell et al., 2019c). The researchers (Strubell et al., 2019c) estimated that the cloud compute costs for training GPT-2 added up to about $12,902–$43,008. Apart from the enormous expenses, this also contributes to our environmental burden, as the process is really energy-intensive. Due to missing power-draw data on GPT-2's training hardware, the researchers were not able to calculate its CO2 emissions. However, for the popular BERT architecture with 110M parameters they calculated cloud compute costs of $3,751–$12,571, an energy consumption of 1,507 kWh, and a carbon footprint of 1,438 lbs of CO2. In comparison, the footprint of flying from New York to San Francisco is about 1,984 lbs of CO2 per passenger. In conclusion, training BERT once results in almost the same footprint as this long-haul flight. On top of this, these numbers are for a single training run. Developing a new model or adapting it often takes several fitting and tuning phases.

Moreover, the computational power as well as the necessary hardware, technology, and financial means to run these models can often only be provided by big technology companies like Google, Facebook, or OpenAI. This results in disparate access between researchers in academia and industry.
Furthermore, the companies sometimes tend not to publish their (best) models, as these are their "product" and contribute to the company's intellectual property. In this way it is not possible to reproduce their work and findings independently. Besides, from an economic point of view, this may be the foundation of a monopoly, which might be dangerous for economic competition and holds the possibility of abuse.

6 Epilogue

Author: Matthias Aßenmacher

Since this project was realized in a limited time frame and accounted for about one third of the ECTS points to be achieved during one semester, it is obvious that this booklet cannot provide exhaustive coverage of the vast research field of Multimodal Deep Learning.

Furthermore, this area of research is moving very rapidly at the moment, which means that certain architectures, improvements, or ideas had not yet been published when we sat down and came up with the chapter topics in February 2022. Yet, as you might have seen, in some cases the students were even able to incorporate ongoing research published over the course of the seminar. Thus, this epilogue tries to put the content of this booklet into context and relate it to what is currently happening. We will focus on two aspects:

• New influential (or even state-of-the-art) architectures
• Extending existing architectures to videos (instead of "only" images)

6.1 New influential architectures

In Chapter 3.2: "Text2Image" and Chapter 4.4: "Generative Art", some important models for generating images/art from free-text prompts have been presented. However, one example of an even better (at least perceived this way by many people) generative model was just published by researchers from Björn Ommer's group at LMU: "High-Resolution Image Synthesis with Latent Diffusion Models".

They introduced a model called Stable Diffusion which allows users to generate photorealistic images.
Furthermore, as opposed to numerous other architectures, it is available open-source and can even be tried out via Hugging Face.

6.2 Creating videos

More recently, research has focused not only on creating images from natural language input but also videos. The Imagen architecture, which was developed by researchers at Google Research (Brain Team), was extended to also create videos (see their project homepage). Yet, this is only one of many possible examples of research being conducted in this direction. The interested reader is referred to the paper accompanying their project.

We hope that this little outlook can adequately round off this nice piece of academic work created by extremely motivated students, and we hope that you enjoyed reading.

7 Acknowledgements

The most important contributions are from the students themselves. The success of such projects highly depends on the students, and this book is a success, so thanks a lot to all the authors! The other important role is that of the supervisors. Thanks to all the supervisors who participated! Special thanks to Christian Heumann and Bernd Bischl, who enabled us to conduct the seminar in such an experimental way, supported us, and gave valuable feedback on the seminar structure. Thanks a lot as well to the entire Department of Statistics and the LMU Munich for the infrastructure.

The authors of this work take full responsibility for its content.

Bibliography

(2022). Neural Networks - History. [Online; accessed 2022-06-29].

Agirre, E., Alfonseca, E., Hall, K., Kravalova, J., Pasca, M., and Soroa, A. (2009). A study on similarity and relatedness using distributional and wordnet-based approaches.

Ailem, M., Zhang, B., Bellet, A., Denis, P., and Sha, F. (2018). A probabilistic model for joint learning of word embeddings from texts and images.

Akbari, H., Yuan, L., Qian, R., Chuang, W., Chang, S., Cui, Y., and Gong, B. (2021).
VATT: Transformers for multimodal self-supervised learning from raw video, audio and text. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W., editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 24206–24221.

Alayrac, J.-B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., et al. (2022). Flamingo: a visual language model for few-shot learning.

Alford, A. (2021). Google announces 800M parameter vision-language AI model ALIGN.

Anderson, P., Fernando, B., Johnson, M., and Gould, S. (2016). SPICE: Semantic propositional image caption evaluation. In Leibe, B., Matas, J., Sebe, N., and Welling, M., editors, Computer Vision – ECCV 2016, pages 382–398. Springer International Publishing.

Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S., and Zhang, L. (2018). Bottom-up and top-down attention for image captioning and visual question answering. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6077–6086.

Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C. L., and Parikh, D. (2015). VQA: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425–2433.

Aran, K. (2021). When you generate images with VQGAN CLIP, the image quality dramatically improves if you add "unreal engine" to your prompt. People are now calling this "unreal engine trick".

Bachmann, R., Mizrahi, D., Atanov, A., and Zamir, A. (2022). MultiMAE: Multi-modal multi-task masked autoencoders.

Baevski, A., Hsu, W.-N., Xu, Q., Babu, A., Gu, J., and Auli, M. (2022). data2vec: A general framework for self-supervised learning in speech, vision and language.

Baevski, A., Zhou, Y., Mohamed, A., and Auli, M. (2020).
wav2vec 2.0: A framework for self-supervised learning of speech representations. 33:12449–12460.

Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate.

Bandy, J. and Vincent, N. (2021). Addressing "documentation debt" in machine learning research: A retrospective datasheet for BookCorpus.

Banerjee, S. and Lavie, A. (2005). METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72. Association for Computational Linguistics.

Bao, H., Dong, L., and Wei, F. (2021). BEiT: BERT pre-training of image transformers.

Barham, P., Chowdhery, A., Dean, J., Ghemawat, S., Hand, S., Hurt, D., Isard, M., Lim, H., Pang, R., Roy, S., Saeta, B., Schuh, P., Sepassi, R., Shafey, L. E., Thekkath, C. A., and Wu, Y. (2022). Pathways: Asynchronous distributed dataflow for ML.

Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. (2013). The arcade learning environment: An evaluation platform for general agents. 47(1):253–279.

Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pages 610–623. Association for Computing Machinery.

Bengio, Y., Courville, A. C., and Vincent, P. (2013). Representation learning: A review and new perspectives. 35(8):1798–1828.

Beyer, L., Hénaff, O. J., Kolesnikov, A., Zhai, X., and Oord, A. v. d. (2020). Are we done with ImageNet?

Birhane, A., Prabhu, V. U., and Kahembwe, E. (2021). Multimodal datasets: misogyny, pornography, and malignant stereotypes.

Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2016). Enriching word vectors with subword information.
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., et al. (2021). On the opportunities and risks of foundation models.
Bordes, P., Zablocki, E., Soulier, L., Piwowarski, B., and Gallinari, P. (2020). Incorporating visual semantics into sentence representations within a grounded space.
Boris, D. (2022). Dall·e mini.
Borji, A. (2018). Pros and cons of GAN evaluation measures.
Bosch, A., Zisserman, A., and Munoz, X. (2007). Image classification using random forests and ferns. In 2007 IEEE 11th international conference on computer vision, pages 1–8. IEEE.
Bowman, S. R. and Dahl, G. E. (2021). What will it take to fix benchmarking in natural language understanding?
Bromley, J., Guyon, I., LeCun, Y., Säckinger, E., and Shah, R. (1993). Signature verification using a "siamese" time delay neural network. 6.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. 33:1877–1901.
Bruni, E., Tran, N.-K., and Baroni, M. (2014). Multimodal distributional semantics. 49:1–47.
Brysbaert, M., Warriner, A. B., and Kuperman, V. (2014). Concreteness ratings for 40 thousand generally known english word lemmas. 46(3):904–911.
Bäck, T. and Schwefel, H.-P. (1993). An overview of evolutionary algorithms for parameter optimization. 1(1):1–23.
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. (2020). End-to-end object detection with transformers.
Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., and Joulin, A. (2020). Unsupervised learning of visual features by contrasting cluster assignments.
Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., and Joulin, A. (2021). Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9650–9660.
Carreira, J., Koppula, S., Zoran, D., Recasens, A., Ionescu, C., Henaff, O., Shelhamer, E., Arandjelovic, R., Botvinick, M., Vinyals, O., Simonyan, K., Zisserman, A., and Jaegle, A. (2022). Hierarchical perceiver.
Caruana, R. (1997). Multitask learning. Machine learning, 28(1):41–75.
Cheerla, A. and Gevaert, O. (2019). Deep learning with multimodal representation for pancancer prognosis prediction. 35(14):i446–i454.
Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. (2020a). A simple framework for contrastive learning of visual representations.
Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. E. (2020b). A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 1597–1607. PMLR.
Chen, X., Xie, S., and He, K. (2021). An empirical study of training self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9640–9649.
Cheng, H.-T., Koc, L., Harmsen, J., Shaked, T., Chandra, T., Aradhye, H., Anderson, G., Corrado, G., Chai, W., Ispir, M., et al. (2016). Wide & deep learning for recommender systems. In Proceedings of the 1st workshop on deep learning for recommender systems, pages 7–10.
Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using rnn encoder-decoder for statistical machine translation.
Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al. (2022). Palm: Scaling language modeling with pathways.
Clark, K., Khandelwal, U., Levy, O., and Manning, C. D. (2019). What does bert look at? An analysis of bert's attention.
Collell, G., Zhang, T., and Moens, M.-F. (2017). Imagined visual representations as multimodal embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.
Cornia, M., Stefanini, M., Baraldi, L., and Cucchiara, R. (2019). Meshed-memory transformer for image captioning.
Cornia, M., Stefanini, M., Baraldi, L., and Cucchiara, R. (2020). Meshed-Memory Transformer for Image Captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Crawshaw, M. (2020). Multi-task learning with deep neural networks: A survey.
Crowson, K., Biderman, S., Kornis, D., Stander, D., Hallahan, E., Castricato, L., and Raff, E. (2022). Vqgan-clip: Open domain image generation and editing with natural language guidance.
Da, J. and Kasai, J. (2019). Cracking the contextual commonsense code: Understanding commonsense reasoning aptitude of deep contextual representations.
Das, A., Agrawal, H., Zitnick, L., Parikh, D., and Batra, D. (2017). Human attention in visual question answering: Do humans and deep networks look at the same regions? 163:90–100.
Dean, J. (2020). 1.1 the deep learning revolution and its implications for computer architecture and chip design. In 2020 IEEE International Solid-State Circuits Conference (ISSCC), pages 8–14.
Dean, J. (2021). Introducing pathways: A next-generation ai architecture.
Dehouche, N. (2021). Plagiarism in the age of massive generative pre-trained transformers (gpt-3). 21:17–23.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE.
Devereux, B. J., Tyler, L. K., Geertzen, J., and Randall, B. (2014). The centre for speech, language and the brain (cslb) concept property norms. 46(4):1119–1127.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018a). Bert: Pre-training of deep bidirectional transformers for language understanding.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018b). Bert: Pre-training of deep bidirectional transformers for language understanding.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018c). Bert: Pre-training of deep bidirectional transformers for language understanding.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. pages 4171–4186. Association for Computational Linguistics.
Dhamala, J., Sun, T., Kumar, V., Krishna, S., Pruksachatkun, Y., Chang, K.-W., and Gupta, R. (2021). Bold: Dataset and metrics for measuring biases in open-ended language generation. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 862–872.
Dhariwal, P. and Nichol, A. (2021). Diffusion models beat gans on image synthesis.
Ding, M., Yang, Z., Hong, W., Zheng, W., Zhou, C., Yin, D., Lin, J., Zou, X., Shao, Z., Yang, H., and Tang, J. (2021). Cogview: Mastering text-to-image generation via transformers.
Doerr, B. and Neumann, F. (2021). A survey on recent progress in the theory of evolutionary algorithms for discrete optimization. 1(4).
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. (2020a). An image is worth 16x16 words: Transformers for image recognition at scale.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. (2020b). An image is worth 16x16 words: Transformers for image recognition at scale.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. (2020c). An image is worth 16x16 words: Transformers for image recognition at scale. CoRR, abs/2010.11929.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. (2021). An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Dwibedi, D., Aytar, Y., Tompson, J., Sermanet, P., and Zisserman, A. (2021). With a little help from my friends: Nearest-neighbor contrastive learning of visual representations. CoRR, abs/2104.14548.
Education, I. C. (2020a). What is Supervised Learning? [Online; accessed 2022-06-29].
Education, I. C. (2020b). What is Unsupervised Learning? [Online; accessed 2022-06-29].
Esser, P., Rombach, R., and Ommer, B. (2020). A note on data biases in generative models.
Esser, P., Rombach, R., and Ommer, B. (2021). Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12873–12883.
Ettinger, A. (2019). What bert is not: Lessons from a new suite of psycholinguistic diagnostics for language models.
Everingham, M., van Gool, L., Williams, C., Winn, J., and Zisserman, A. (2010). The pascal visual object classes (voc) challenge. 88(2):303–338.
Fellbaum, C. (2010). Wordnet. In Theory and applications of ontology: computer applications, pages 231–243. Springer.
Fellbaum, C. D. (2000). Wordnet: an electronic lexical database. 76:706.
Fernando, C., Banarse, D., Blundell, C., Zwols, Y., Ha, D., Rusu, A. A., Pritzel, A., and Wierstra, D. (2017). Pathnet: Evolution channels gradient descent in super neural networks.
Forbes, M., Holtzman, A., and Choi, Y. (2019). Do neural language representations learn physical commonsense?
Gafni, O., Polyak, A., Ashual, O., Sheynin, S., Parikh, D., and Taigman, Y. (2022). Make-a-scene: Scene-based text-to-image generation with human priors.
Galanter, P. (2016). Generative art theory. 1:631.
Gao, J., Li, Z., Nevatia, R., et al. (2017). Knowledge concentration: Learning 100k object classifiers in a single cnn.
Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., et al. (2020). The pile: An 800gb dataset of diverse text for language modeling.
Gatys, L. A., Ecker, A. S., and Bethge, M. (2016). A neural algorithm of artistic style.
Gebru, T., Krause, J., Wang, Y., Chen, D., Deng, J., Aiden, E., and Fei-Fei, L. (2017). Using deep learning and google street view to estimate the demographic makeup of neighborhoods across the united states. 114:201700035.
Gesmundo, A. and Dean, J. (2022). munet: Evolving pretrained deep neural networks into scalable auto-tuning multitask systems.
Gokaslan, A. and Cohen, V. (2019). Openwebtext corpus.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014a). Generative adversarial nets. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., and Weinberger, K., editors, Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014b). Generative adversarial networks.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014c). Generative adversarial networks.
Goodfellow, I. J., Shlens, J., and Szegedy, C. (2014d). Explaining and harnessing adversarial examples.
Google (2022). Embeddings: Translating to a lower-dimensional space. https://developers.google.com/machine-learning/crash-course/embeddings/translating-to-a-lower-dimensional-space.
Goyal, Y., Khot, T., Summers-Stay, D., Batra, D., and Parikh, D. (2017). Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904–6913.
Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P. H., Buchatskaya, E., Doersch, C., Pires, B. A., Guo, Z., Azar, M. G., Piot, B., Kavukcuoglu, K., Munos, R., and Valko, M. (2020a). Bootstrap your own latent: A new approach to self-supervised learning.
Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P. H., Buchatskaya, E., Doersch, C., Pires, B. A., Guo, Z. D., Azar, M. G., Piot, B., Kavukcuoglu, K., Munos, R., and Valko, M. (2020b). Bootstrap your own latent: A new approach to self-supervised learning.
Guo, Y., Zhang, L., Hu, Y., He, X., and Gao, J. (2016). Ms-celeb-1m: A dataset and benchmark for large-scale face recognition. In European conference on computer vision, pages 87–102. Springer.
Harnad, S. (1990). The symbol grounding problem. 42(1-3):335–346.
Harris, Z. et al. (1954). Distributional hypothesis. 10(23):146–162.
Hart, B. and Risley, T. R. (1995). Meaningful differences in the everyday experience of young american children.
He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick, R. (2022). Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009.
He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep residual learning for image recognition.
Henderson, P., Hu, J., Romoff, J., Brunskill, E., Jurafsky, D., and Pineau, J. (2020). Towards the systematic reporting of the energy and carbon footprints of machine learning. 21(248):1–43.
Herdade, S., Kappeler, A., Boakye, K., and Soares, J. (2019). Image captioning: Transforming objects into words. pages 11135–11145.
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. (2017). Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Hill, F. and Korhonen, A. (2014). Learning abstract concept embeddings from multi-modal data: Since you probably can't see what i mean. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 255–265.
Hill, F., Reichart, R., and Korhonen, A. (2015). Simlex-999: Evaluating semantic models with (genuine) similarity estimation. 41(4):665–695.
Hinton, G., Vinyals, O., and Dean, J. (2015a). Distilling the knowledge in a neural network.
Hinton, G., Vinyals, O., Dean, J., et al. (2015b). Distilling the knowledge in a neural network. 2(7).
Ho, J., Jain, A., and Abbeel, P. (2020a). Denoising diffusion probabilistic models.
Ho, J., Jain, A., and Abbeel, P. (2020b). Denoising diffusion probabilistic models. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H., editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. 9(8):1735–1780.
Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., Casas, D. d. L., Hendricks, L. A., Welbl, J., Clark, A., et al. (2022). Training compute-optimal large language models.
Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861.
Hu, R. and Singh, A. (2021a). Unit: Multimodal multitask learning with a unified transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1439–1449.
Hu, R. and Singh, A. (2021b). Unit: Multimodal multitask learning with a unified transformer. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 1419–1429.
Huang, L., Wang, W., Chen, J., and Wei, X.-Y. (2019). Attention on attention for image captioning. pages 4633–4642.
Huang, S.-C., Pareek, A., Seyyedi, S., Banerjee, I., and Lungren, M. P. (2020). Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines. 3(1):1–9.
Huang, Y., Cheng, Y., Chen, D., Lee, H., Ngiam, J., Le, Q. V., and Chen, Z. (2018). Gpipe: Efficient training of giant neural networks using pipeline parallelism. CoRR, abs/1811.06965.
Hudson, D. A. and Manning, C. D. (2019). Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700–6709.
Sleeman, W. C., IV, Kapoor, R., and Ghosh, P. (2021). Multimodal classification: Current landscape, taxonomy and future directions.
Ive, J., Madhyastha, P., and Specia, L. (2019). Distilling translations with visual awareness.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. (1991). Adaptive mixtures of local experts. 3(1):79–87.
Jaegle, A., Gimeno, F., Brock, A., Vinyals, O., Zisserman, A., and Carreira, J. (2021a). Perceiver: General perception with iterative attention. In International conference on machine learning, pages 4651–4664. PMLR.
Jaegle, A., Gimeno, F., Brock, A., Vinyals, O., Zisserman, A., and Carreira, J. (2021b). Perceiver: General perception with iterative attention. In Meila, M. and Zhang, T., editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 4651–4664. PMLR.
Jaspreet (2019). A Concise History of Neural Networks | by Jaspreet | Towards Data Science. [Online; accessed 2022-06-29].
Jean, N., Burke, M., Xie, M., Davis, W. M., Lobell, D. B., and Ermon, S. (2016). Combining satellite imagery and machine learning to predict poverty. 353(6301):790–794.
Jia, C., Yang, Y., Xia, Y., Chen, Y., Parekh, Z., Pham, H., Le, Q. V., Sung, Y., Li, Z., and Duerig, T. (2021a). Scaling up visual and vision-language representation learning with noisy text supervision.
Jia, C., Yang, Y., Xia, Y., Chen, Y.-T., Parekh, Z., Pham, H., Le, Q., Sung, Y.-H., Li, Z., and Duerig, T. (2021b). Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904–4916. PMLR.
Jordan, M. I. and Jacobs, R. A. (1994). Hierarchical mixtures of experts and the em algorithm. 6(2):181–214.
Joseph, K. J., Khan, S. H., Khan, F. S., and Balasubramanian, V. N. (2021). Towards open world object detection. CoRR, abs/2103.02603.
Joshi, G., Walambe, R., and Kotecha, K. (2021). A review on explainability in multimodal deep neural nets. 9:59800–59821.
Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S. A. A., Ballard, A. J., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., Back, T., Petersen, S., Reiman, D., Clancy, E., Zielinski, M., Steinegger, M., Pacholska, M., Berghammer, T., Bodenstein, S., Silver, D., Vinyals, O., Senior, A. W., Kavukcuoglu, K., Kohli, P., and Hassabis, D. (2021). Highly accurate protein structure prediction with AlphaFold. 596(7873):583–589.
Kahatapitiya, K. and Ryoo, M. S. (2021). Swat: Spatial structure within and among tokens.
Kaiser, L., Gomez, A. N., Shazeer, N., Vaswani, A., Parmar, N., Jones, L., and Uszkoreit, J. (2017). One model to learn them all.
Karpathy, A. and Fei-Fei, L. (2014). Deep visual-semantic alignments for generating image descriptions.
Karras, T., Laine, S., and Aila, T. (2019). A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4401–4410.
Katzman, J. L., Shaham, U., Cloninger, A., Bates, J., Jiang, T., and Kluger, Y. (2018). Deepsurv: personalized treatment recommender system using a cox proportional hazards deep neural network. 18(1):1–12.
Kiela, D. and Bottou, L. (2014). Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In Proceedings of the 2014 Conference on empirical methods in natural language processing (EMNLP), pages 36–45.
Kiela, D., Conneau, A., Jabri, A., and Nickel, M. (2017). Learning visually grounded sentence representations.
Kingma, D. P. and Welling, M. (2013). Auto-encoding variational bayes.
Kingma, D. P. and Welling, M. (2019). An introduction to variational autoencoders.
Kiros, J., Chan, W., and Hinton, G. (2018). Illustrative language understanding: Large-scale visual grounding with image search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 922–933.
Koehn, P. (2005). Europarl: A parallel corpus for statistical machine translation. In Proceedings of machine translation summit x: papers, pages 79–86.
Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., and Houlsby, N. (2019). Large scale learning of general visual representations for transfer. 2(8).
Kopper, P., Wiegrebe, S., Bischl, B., Bender, A., and Rügamer, D. (2022). Deeppamm: Deep piecewise exponential additive mixed models for complex hazard structures in survival analysis. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 249–261. Springer.
Kottur, S., Vedantam, R., Moura, J. M., and Parikh, D. (2016). Visual word2vec (vis-w2v): Learning visually grounded word embeddings using abstract scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4985–4994.
Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.-J., Shamma, D. A., Bernstein, M., and Fei-Fei, L. (2016). Visual genome: Connecting language and vision using crowdsourced dense image annotations.
Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.-J., Shamma, D. A., et al. (2017). Visual genome: Connecting language and vision using crowdsourced dense image annotations. 123(1):32–73.
Krizhevsky, A., Hinton, G., et al. (2009). Learning multiple layers of features from tiny images.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012a). Imagenet classification with deep convolutional neural networks. In Pereira, F., Burges, C., Bottou, L., and Weinberger, K., editors, Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012b). Imagenet classification with deep convolutional neural networks. 25.
Kudo, T. and Richardson, J. (2018). SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71. Association for Computational Linguistics.
Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., Kamali, S., Popov, S., Malloci, M., Kolesnikov, A., et al. (2020). The open images dataset v4. 128(7):1956–1981.
Kynkäänniemi, T., Karras, T., Laine, S., Lehtinen, J., and Aila, T. (2019). Improved precision and recall metric for assessing generative models.
Law, S., Paige, B., and Russell, C. (2019). Take a look around. 10(5):1–19.
Lazaridou, A., Pham, N. T., and Baroni, M. (2015). Combining language and vision with a multimodal skip-gram model.
LeCun, Y. (2022). A path towards autonomous machine intelligence version 0.9.2, 2022-06-27.
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., and Zettlemoyer, L. (2020). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880. Association for Computational Linguistics.
Lewkowycz, A., Andreassen, A., Dohan, D. M., Dyer, E. S., Michalewski, H., Ramasesh, V., Slone, A., Anil, C., Schlag, I., Gutman-Solo, T., Wu, Y., Neyshabur, B., Gur-Ari, G., and Misra, V. (2022). Solving quantitative reasoning problems with language models.
Lialin, V., Zhao, K., Shivagunde, N., and Rumshisky, A. (2022). Life after bert: What do other muppets understand about language?
Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81. Association for Computational Linguistics.
Lin, T.-Y., Maire, M., Belongie, S., Bourdev, L., Girshick, R., Hays, J., Perona, P., Ramanan, D., Zitnick, C. L., and Dollár, P. (2014a). Microsoft coco: Common objects in context.
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. (2014b). Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer.
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. (2014c). Microsoft coco: Common objects in context. In Computer Vision – ECCV 2014, pages 740–755. Springer International Publishing.
Lin, Y., Tan, Y. C., and Frank, R. (2019). Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics.
Liu, N. F., Gardner, M., Belinkov, Y., Peters, M. E., and Smith, N. A. (2019a). Linguistic knowledge and transferability of contextual representations.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019b). Roberta: A robustly optimized bert pretraining approach.
Lottick, K., Susai, S., Friedler, S. A., and Wilson, J. P. (2019). Energy usage reports: Environmental awareness as part of algorithmic accountability.
Lu, J., Batra, D., Parikh, D., and Lee, S. (2019a). Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks.
Lu, J., Batra, D., Parikh, D., and Lee, S. (2019b). Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. 32.
Lu, Y., Zhu, W., Wang, X. E., Eckstein, M., and Wang, W. Y. (2022). Imagination-augmented natural language understanding.
Mahajan, D., Girshick, R., Ramanathan, V., He, K., Paluri, M., Li, Y., Bharambe, A., and Van Der Maaten, L. (2018). Exploring the limits of weakly supervised pretraining. In Proceedings of the European conference on computer vision (ECCV), pages 181–196.
Manning, C., Goldie, A., and Hewitt, J. (2022). Stanford cs224n: Natural language processing with deep learning. https://web.stanford.edu/class/cs224n/slides/.
Mayer, T. and Cysouw, M. (2014). Creating a massively parallel bible corpus. 135(273):40.
McCormack, J. and Gambardella, C. C. (2022). Growing and evolving 3-d prints. 26(1):88–99.
Barthel, M., Stocking, G., Holcomb, J., and Mitchell, A. (2016). Reddit news users more likely to be male, young and digital in their news preferences.
Midjourney (2022). Midjourney. https://www.midjourney.com/. Accessed: 2022-09-12.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013b). Efficient estimation of word representations in vector space.
Mikolov, T., Le, Q. V., and Sutskever, I. (2013c). Exploiting similarities among languages for machine translation.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G., and Dean, J. (2013d). Distributed representations of words and phrases and their compositionality.
Mineault, P. (2021). Unsupervised models of the brain.
Microsoft (2019). Evaluate: detection.
Mishkin, P., Ahmad, L., Brundage, M., Krueger, G., and Sastry, G. (2022). Dall·e 2 preview - risks and limitations.
Mordvintsev, A. (2015). Inceptionism: Going deeper into neural networks.
Mustafa, B., Riquelme, C., Puigcerver, J., Jenatton, R., and Houlsby, N. (2022). Multimodal contrastive learning with limoe: the language-image mixture of experts.
Nagrani, A., Yang, S., Arnab, A., Jansen, A., Schmid, C., and Sun, C. (2021). Attention bottlenecks for multimodal fusion. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W., editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 14200–14213.
Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., and Chen, M. (2021a). Glide: Towards photorealistic image generation and editing with text-guided diffusion models.
Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., and Chen, M. (2021b). GLIDE: towards photorealistic image generation and editing with text-guided diffusion models.
OpenAI (2021). Dall-e.
Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics.
Parcalabescu, L., Cafagna, M., Muradjan, L., Frank, A., Calixto, I., and Gatt, A. (2022). VALSE: A task-independent benchmark for vision and language models centered on linguistic phenomena. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8253–8280. Association for Computational Linguistics.
Patashnik, O., Wu, Z., Shechtman, E., Cohen-Or, D., and Lischinski, D. (2021). Styleclip: Text-driven manipulation of stylegan imagery.
Pennington, J., Socher, R., and Manning, C. D. (2014). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543.
Perez, E., Kiela, D., and Cho, K. (2021a). True few-shot learning with language models.
Perez, E., Kiela, D., and Cho, K. (2021b). True few-shot learning with language models. 34:11054–11070.
Pezzelle, S., Takmaz, E., and Fernández, R. (2021). Word representation learning in multimodal pre-trained transformers: An intrinsic evaluation. 9:1563–1579.
Pilehvar, M. T. and Camacho-Collados, J. (2021). Embeddings in Natural Language Processing. Springer International Publishing.
Pont-Tuset, J., Uijlings, J., Changpinyo, S., Soricut, R., and Ferrari, V. (2020). Connecting vision and language with localized narratives. In European conference on computer vision, pages 647–664. Springer.
Prabhu, V. U. and Birhane, A. (2020). Large image datasets: A pyrrhic win for computer vision?
Pölsterl, S., Sarasua, I., Gutiérrez-Becker, B., and Wachinger, C. (2019). A wide and deep neural network for survival analysis from anatomical shape and tabular clinical data. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 453–464. Springer.
Qiao, H., Liu, V., and Chilton, L. (2022). Initial images: Using image prompts to improve subject representation in multimodal ai generated art. In Creativity and Cognition, pages 15–28.
Qiao, T., Zhang, J., Xu, D., and Tao, D. (2019). Mirrorgan: Learning text-to-image generation by redescription.
R Core Team (2018). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. (2021a). Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., and Sutskever, I. (2021b). Learning transferable visual models from natural language supervision.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., and Sutskever, I. (2021c). Learning transferable visual models from natural language supervision. In Meila, M. and Zhang, T., editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763. PMLR.
Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. (2018). Improving language understanding by generative pre-training.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019a). Language models are unsupervised multitask learners.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. (2019b). Language models are unsupervised multitask learners. 1(8):9.
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. (2019a). Exploring the limits of transfer learning with a unified text-to-text transformer.
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. (2019b). Exploring the limits of transfer learning with a unified text-to-text transformer.
Raghu, M., Poole, B., Kleinberg, J., Ganguli, S., and Sohl-Dickstein, J. (2016). On the expressive power of deep neural networks.
Rajpurkar, P., Jia, R., and Liang, P. (2018). Know what you don't know: Unanswerable questions for squad.
Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016). Squad: 100,000+ questions for machine comprehension of text.
Ramachandran, P., Parmar, N., Vaswani, A., Bello, I., Levskaya, A., and Shlens, J. (2019). Stand-alone self-attention in vision models. CoRR, abs/1906.05909.
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen, M. (2022a). Hierarchical text-conditional image generation with clip latents.
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen, M. (2022b). Hierarchical text-conditional image generation with clip latents. 2022.
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. (2021a). Zero-shot text-to-image generation.
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. (2021b). Zero-shot text-to-image generation.
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. (2021c). Zero-shot text-to-image generation. In Meila, M. and Zhang, T., editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 8821–8831. PMLR.
+Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., +and Sutskever, I. (2021d). Zero-shot text-to-image generation. In Meila, M. +and Zhang, T., editors, Proceedings of the 38th International Conference on +Machine Learning, volume 139 of Proceedings of Machine Learning Research, +pages 8821–8831. PMLR. +Rebuffi, S.-A., Bilen, H., and Vedaldi, A. (2017). Learning multiple visual +domains with residual adapters. In Guyon, I., Luxburg, U. V., Bengio, +S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, +Advances in Neural Information Processing Systems, volume 30. Curran +Associates, Inc. +Recht, B., Roelofs, R., Schmidt, L., and Shankar, V. (2019). Do imagenet +classifiers generalize to imagenet? In International Conference on Machine +Learning, pages 5389–5400. PMLR. +Reed, S., Zolna, K., Parisotto, E., Colmenarejo, S. G., Novikov, A., Barth- +Maron, G., Gimenez, M., Sulsky, Y., Kay, J., Springenberg, J. T., Eccles, +T., Bruce, J., Razavi, A., Edwards, A., Heess, N., Chen, Y., Hadsell, R., +Vinyals, O., Bordbar, M., and de Freitas, N. (2022). A generalist agent. +Reed, S. E., Akata, Z., Mohan, S., Tenka, S., Schiele, B., and Lee, H. (2016a). +Learning what and where to draw. + +258 +7 Bibliography +Reed, S. E., Akata, Z., Schiele, B., and Lee, H. (2016b). +Learning deep +representations of fine-grained visual descriptions. +Reed, S. E., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., and Lee, H. +(2016c). Generative adversarial text to image synthesis. +Ren, S., He, K., Girshick, R., and Sun, J. (2015). Faster r-cnn: Towards +real-time object detection with region proposal networks. 28. +Rennie, S. J., Marcheret, E., Mroueh, Y., Ross, J., and Goel, V. (2017). Self- +critical sequence training for image captioning. In 2017 IEEE Conference +on Computer Vision and Pattern Recognition (CVPR), pages 1179–1195. +Ribeiro, M. T., Wu, T., Guestrin, C., and Singh, S. (2020). 
Beyond accuracy: +Behavioral testing of nlp models with checklist. +Riquelme, C., Puigcerver, J., Mustafa, B., Neumann, M., Jenatton, R., Su- +sano Pinto, A., Keysers, D., and Houlsby, N. (2021). Scaling vision with +sparse mixture of experts. In Ranzato, M., Beygelzimer, A., Dauphin, Y., +Liang, P., and Vaughan, J. W., editors, Advances in Neural Information +Processing Systems, volume 34, pages 8583–8595. Curran Associates, Inc. +Ritchie, H., Roser, M., and Rosado, P. (2020). co2 and greenhouse gas emissions. +https://ourworldindata.org/co2-and-other-greenhouse-gas-emissions. +Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. (2021). +High-resolution image synthesis with latent diffusion models. +Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. (2022). +Stablediffusion. https://github.com/CompVis/stable-diffusion. Accessed: +2022-09-12. +Rosset, C. (2020). Turing-nlg: A 17-billion-parameter language model by +microsoft. 1(2). +Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., +Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. (2015). +Imagenet large scale visual recognition challenge. 115(3):211–252. +Rügamer, D., Kolb, C., and Klein, N. (2020). Semi-structured deep distribu- +tional regression: Combining structured additive models and deep learning. +Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E., Ghasemipour, +S. K. S., Ayan, B. K., Mahdavi, S. S., Lopes, R. G., Salimans, T., Ho, J., +Fleet, D. J., and Norouzi, M. (2022a). Photorealistic text-to-image diffusion +models with deep language understanding. +Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E., Ghasemipour, +S. K. S., Ayan, B. K., Mahdavi, S. S., Lopes, R. G., Salimans, T., Ho, J., +Fleet, D. J., and Norouzi, M. (2022b). Photorealistic text-to-image diffusion +models with deep language understanding. + +7.0 Bibliography +259 +Saifee, M. (2020). 
+Gpt-3: The new mighty language model from ope- +nai. https://towardsdatascience.com/gpt-3-the-new-mighty-language-model- +from-openai-a74ff35346fc. +Sajjadi, M. S. M., Bachem, O., Lucic, M., Bousquet, O., and Gelly, S. (2018). +Assessing generative models via precision and recall. +Salimans, T., Goodfellow, I. J., Zaremba, W., Cheung, V., Radford, A., and +Chen, X. (2016). Improved techniques for training gans. +Schick, T. and Schütze, H. (2020). Exploiting cloze questions for few shot text +classification and natural language inference. +Schuhmann, C. (2022). Laion-400-million open dataset. +Schuhmann, C., Vencu, R., Beaumont, R., Kaczmarczyk, R., Mullis, C., Katta, +A., Coombes, T., Jitsev, J., and Komatsuzaki, A. (2021a). Laion-400m: +Open dataset of clip-filtered 400 million image-text pairs. +Schuhmann, C., Vencu, R., Beaumont, R., Kaczmarczyk, R., Mullis, C., Katta, +A., Coombes, T., Jitsev, J., and Komatsuzaki, A. (2021b). LAION-400M: +open dataset of clip-filtered 400 million image-text pairs. +Sejnowski, T. J. (2020). The unreasonable effectiveness of deep learning in +artificial intelligence. +Sennrich, R., Haddow, B., and Birch, A. (2015a). Neural machine translation +of rare words with subword units. +Sennrich, R., Haddow, B., and Birch, A. (2015b). Neural machine translation +of rare words with subword units. +Sennrich, R., Haddow, B., and Birch, A. (2016). Neural machine translation of +rare words with subword units. In Proceedings of the 54th Annual Meeting +of the Association for Computational Linguistics (Volume 1: Long Papers), +pages 1715–1725. Association for Computational Linguistics. +Shah, D. (2022). Self-Supervised Learning and Its Applications - neptune.ai. +[Online; accessed 2022-06-29]. +Shao, S., Li, Z., Zhang, T., Peng, C., Yu, G., Zhang, X., Li, J., and Sun, J. +(2019). Objects365: A large-scale, high-quality dataset for object detection. +In Proceedings of the IEEE/CVF international conference on computer +vision, pages 8430–8439. 
+Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., and +Dean, J. (2017). Outrageously large neural networks: The sparsely-gated +mixture-of-experts layer. +Shekhar, R., Pezzelle, S., Klimovich, Y., Herbelot, A., Nabi, M., Sangineto, +E., and Bernardi, R. (2017). Foil it! find one mismatch between image and +language caption. + +260 +7 Bibliography +Shen, S., Li, L. H., Tan, H., Bansal, M., Rohrbach, A., Chang, K.-W., Yao, +Z., and Keutzer, K. (2021). How much can clip benefit vision-and-language +tasks? +Sheng, E., Chang, K.-W., Natarajan, P., and Peng, N. (2019). The woman +worked as a babysitter: On biases in language generation. +Shonenkov, A. (2021). rudall-e. +Shvetsova, N., Chen, B., Rouditchenko, A., Thomas, S., Kingsbury, B., Feris, +R., Harwath, D., Glass, J., and Kuehne, H. (2021). Everything at once - +multi-modal fusion transformer for video retrieval. +Sikarwar, A. and Kreiman, G. (2022). On the efficacy of co-attention trans- +former layers in visual question answering. +Silberer, C. and Lapata, M. (2014). Learning grounded meaning representations +with autoencoders. +In Proceedings of the 52nd Annual Meeting of the +Association for Computational Linguistics (Volume 1: Long Papers), pages +721–732. +Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for +large-scale image recognition. arXiv preprint arXiv:1409.1556. +Singh, A., Hu, R., Goswami, V., Couairon, G., Galuba, W., Rohrbach, M., +and Kiela, D. (2022). Flava: A foundational language and vision alignment +model. In Proceedings of the IEEE/CVF Conference on Computer Vision +and Pattern Recognition, pages 15638–15650. +Sirko, W., Kashubin, S., Ritter, M., Annkah, A., Bouchareb, Y. S. E., +Dauphin, Y. N., Keysers, D., Neumann, M., Cissé, M., and Quinn, J. (2021). +Continental-scale building detection from high resolution satellite imagery. +Snell, C. (2021). Understanding vq-vae. https://ml.berkeley.edu/blog/posts/vq- +vae/. Accessed: 2022-09-12. 
+Socher, R. and Fei-fei, L. (2010). Connecting modalities: Semi-supervised +segmentation and annotation of images using unaligned text corpora. In +In IEEE Computer Society Conference on Computer Vision and Pattern +Recognition. +Soderlund, J. and Blair, A. (2018). Adversarial image generation using evolution +and deep learning. In 2018 IEEE Congress on Evolutionary Computation +(CEC), pages 1–8. +Sohl-Dickstein, J., Weiss, E. A., Maheswaranathan, N., and Ganguli, S. (2015). +Deep unsupervised learning using nonequilibrium thermodynamics. +Srinivasan, K., Raman, K., Chen, J., Bendersky, M., and Najork, M. (2021). +Wit: Wikipedia-based image text dataset for multimodal multilingual ma- +chine learning. In Proceedings of the 44th International ACM SIGIR Con- + +7.0 Bibliography +261 +ference on Research and Development in Information Retrieval, pages 2443– +2449. +Srinivasan, R. and Uchino, K. (2021). Biases in generative art: A causal look +from the lens of art history. In Proceedings of the 2021 ACM Conference on +Fairness, Accountability, and Transparency, pages 41–51. +Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., +Brown, A. R., Santoro, A., Gupta, A., Garriga-Alonso, A., et al. (2022). +Beyond the imitation game: Quantifying and extrapolating the capabilities +of language models. +Steiner, A., Kolesnikov, A., Zhai, X., Wightman, R., Uszkoreit, J., and Beyer, +L. (2021). How to train your vit? data, augmentation, and regularization in +vision transformers. +Strubell, E., Ganesh, A., and McCallum, A. (2019a). +Energy and policy +considerations for deep learning in NLP. In Proceedings of the 57th Annual +Meeting of the Association for Computational Linguistics, pages 3645–3650. +Association for Computational Linguistics. +Strubell, E., Ganesh, A., and McCallum, A. (2019b). +Energy and policy +considerations for deep learning in nlp. +Strubell, E., Ganesh, A., and McCallum, A. (2019c). 
+Energy and policy +considerations for deep learning in nlp. +Sulubacak, U., Caglayan, O., Grönroos, S., Rouhe, A., Elliott, D., Specia, L., +and Tiedemann, J. (2020). Multimodal machine translation through visuals +and speech. 34(2-3):97–147. +Sun, C., Shrivastava, A., Singh, S., and Gupta, A. (2017). Revisiting unreason- +able effectiveness of data in deep learning era. In Proceedings of the IEEE +international conference on computer vision, pages 843–852. +Sun, Q., Wang, Y., Xu, C., Zheng, K., Yang, Y., Hu, H., Xu, F., Zhang, J., +Geng, X., and Jiang, D. (2021). Multimodal dialogue response generation. +Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning +with neural networks. +Sutton, R. S. (2019). The bitter lesson. +Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2015). Re- +thinking the inception architecture for computer vision. +Tan, H. and Bansal, M. (2020). Vokenization: Improving language understand- +ing with contextualized, visual-grounded supervision. +Tan, M. and Le, Q. V. (2019a). Efficientnet: Rethinking model scaling for +convolutional neural networks. + +262 +7 Bibliography +Tan, M. and Le, Q. V. (2019b). Efficientnet: Rethinking model scaling for +convolutional neural networks. CoRR, abs/1905.11946. +Tao, M., Tang, H., Wu, S., Sebe, N., Wu, F., and Jing, X. (2020). DF-GAN: +deep fusion generative adversarial networks for text-to-image synthesis. +techslang (2020). What is Self-Supervised Learning? — Definition by Techslang. +[Online; accessed 2022-06-29]. +Theis, L., Oord, A. v. d., and Bethge, M. (2015). A note on the evaluation of +generative models. +Tian, Y., Krishnan, D., and Isola, P. (2020). Contrastive multiview coding. In +European conference on computer vision, pages 776–794. Springer. +Tiu, E. (2021). Understanding Contrastive Learning | by Ekin Tiu | Towards +Data Science. [Online; accessed 2022-06-29]. +Tong, C., Li, J., Lang, C., Kong, F., Niu, J., and Rodrigues, J. J. (2018). 
An +efficient deep model for day-ahead electricity load forecasting with stacked +denoising auto-encoders. 117:267–273. +Torralba, A. and Efros, A. A. (2011). Unbiased look at dataset bias. In CVPR +2011, pages 1521–1528. IEEE. +Uppal, S., Bhagat, S., Hazarika, D., Majumder, N., Poria, S., Zimmermann, +R., and Zadeh, A. (2022). Multimodal research in vision and language: A +review of current and emerging trends. 77:149–171. +Vale-Silva, L. A. and Rohr, K. (2021). Long-term cancer survival prediction +using multimodal deep learning. 11(1):1–12. +van den Oord, A., Vinyals, O., and Kavukcuoglu, K. (2017). Neural discrete +representation learning. +Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., +Kaiser, L., and Polosukhin, I. (2017a). Attention is all you need. In Guyon, I., +Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and +Garnett, R., editors, Advances in Neural Information Processing Systems, +volume 30. Curran Associates, Inc. +Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., +Kaiser, L., and Polosukhin, I. (2017b). Attention is all you need. 30. +Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., +Kaiser, L., and Polosukhin, I. (2017c). Attention is all you need. +Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., +Kaiser, L., and Polosukhin, I. (2017d). Attention is all you need. In Guyon, I., +Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and +Garnett, R., editors, Advances in Neural Information Processing Systems, +volume 30. Curran Associates, Inc. + +7.0 Bibliography +263 +Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., +Kaiser, L., and Polosukhin, I. (2017e). Attention is all you need. +Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., +Kaiser, L., and Polosukhin, I. (2017f). Attention is all you need. 
In Guyon, +I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. +V. N., and Garnett, R., editors, Advances in Neural Information Processing +Systems 30: Annual Conference on Neural Information Processing Systems +2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. +Vedantam, R., Zitnick, C. L., and Parikh, D. (2015). Cider: Consensus-based +image description evaluation. In 2015 IEEE Conference on Computer Vision +and Pattern Recognition (CVPR), pages 4566–4575. +Vinyals, O., Toshev, A., Bengio, S., and Erhan, D. (2015). Show and tell: A +neural image caption generator. pages 3156–3164. +Voita, E., Talbot, D., Moiseev, F., Sennrich, R., and Titov, I. (2019). Analyzing +multi-head self-attention: Specialized heads do the heavy lifting, the rest +can be pruned. +Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. (2018). +Glue: A multi-task benchmark and analysis platform for natural language +understanding. +Wang, J. and Li, S. (2018). Detection and classification of acoustic scenes and +events 2018 self-attention mechanism based system for dcase2018 challenge +task1 and task4. +Wang, P., Yang, A., Men, R., Lin, J., Bai, S., Li, Z., Ma, J., Zhou, C., +Zhou, J., and Yang, H. (2022). OFA: Unifying architectures, tasks, and +modalities through a simple sequence-to-sequence learning framework. In +Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., and Sabato, +S., editors, Proceedings of the 39th International Conference on Machine +Learning, volume 162 of Proceedings of Machine Learning Research, pages +23318–23340. PMLR. +Wang, Q., Li, R., Wang, Q., and Chen, S. (2021). Non-fungible token (nft): +Overview, evaluation, opportunities and challenges. +Website (2020). Localized narratives data and visualization. +Wei, C., Fan, H., Xie, S., Wu, C.-Y., Yuille, A., and Feichtenhofer, C. (2022). +Masked feature prediction for self-supervised visual pre-training. 
In Pro- +ceedings of the IEEE/CVF Conference on Computer Vision and Pattern +Recognition, pages 14668–14678. +Weng, L. (2018). From autoencoder to beta-vae. +Weng, L. (2021). What are diffusion models? + +264 +7 Bibliography +Wenzek, G., Lachaux, M.-A., Conneau, A., Chaudhary, V., Guzmán, F., Joulin, +A., and Grave, E. (2019). +Ccnet: Extracting high quality monolingual +datasets from web crawl data. +Wu, C., Liang, J., Ji, L., Yang, F., Fang, Y., Jiang, D., and Duan, N. (2021). +NÜwa: Visual synthesis pre-training for neural visual world creation. +WZRD (2020). Wzrd. +Xiao, J., Hays, J., Ehinger, K. A., Oliva, A., and Torralba, A. (2010). Sun +database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE +Computer Society Conference on Computer Vision and Pattern Recognition, +pages 3485–3492. +Xie, Q., Hovy, E. H., Luong, M., and Le, Q. V. (2019). Self-training with noisy +student improves imagenet classification. CoRR, abs/1911.04252. +Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, +R., and Bengio, Y. (2015). Show, attend and tell: Neural image caption +generation with visual attention. +Xu, T., Zhang, P., Huang, Q., Zhang, H., Gan, Z., Huang, X., and He, X. +(2017). Attngan: Fine-grained text to image generation with attentional +generative adversarial networks. +Xue, L., Constant, N., Roberts, A., Kale, M., Al-Rfou, R., Siddhant, A., +Barua, A., and Raffel, C. (2020). mt5: A massively multilingual pre-trained +text-to-text transformer. +Yang, X., Tang, K., Zhang, H., and Cai, J. (2019). Auto-encoding scene +graphs for image captioning. In Proceedings of the IEEE/CVF Conference +on Computer Vision and Pattern Recognition (CVPR). +Yann, L. and Ishan, M. (2021). Self-supervised learning: The dark matter of +intelligence. +Yao, B. Z., Yang, X., Lin, L., Lee, M. W., and Zhu, S.-C. (2010). I2t: Image +parsing to text description. 98(8):1485–1508. +Yao, J., Zhu, X., Zhu, F., and Huang, J. (2017). 
Deep correlational learning +for survival prediction from multi-modality data. In Medical Image Com- +puting and Computer-Assisted Intervention - MICCAI 2017, pages 406–414. +Springer International Publishing. +Yao, T., Pan, Y., Li, Y., and Mei, T. (2018a). Exploring visual relationship +for image captioning. +Yao, T., Pan, Y., Li, Y., and Mei, T. (2018b). Exploring visual relationship +for image captioning. +You, J., Li, X., Low, M., Lobell, D., and Ermon, S. (2017). Deep gaussian +process for crop yield prediction based on remote sensing data. In Proceedings + +7.0 Bibliography +265 +of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI’17, +pages 4559–4565. AAAI Press. +Young, P., Lai, A., Hodosh, M., and Hockenmaier, J. (2014). From image +descriptions to visual denotations: New similarity metrics for semantic +inference over event descriptions. 2:67–78. +Yu, J., Li, X., Koh, J. Y., Zhang, H., Pang, R., Qin, J., Ku, A., Xu, Y., +Baldridge, J., and Wu, Y. (2021). Vector-quantized image modeling with +improved VQGAN. +Yu, J., Xu, Y., Koh, J., Luong, T., Baid, G., Vasudevan, V., Ku, A., Yang, Y., +Ayan, B., Hutchinson, B., Han, W., Parekh, Z., Li, X., Zhang, H., Baldridge, +J., and Wu, Y. (2022a). Scaling autoregressive models for content-rich +text-to-image generation. +Yu, J., Xu, Y., Koh, J. Y., Luong, T., Baid, G., Wang, Z., Vasudevan, V., +Ku, A., Yang, Y., Ayan, B. K., Hutchinson, B., Han, W., Parekh, Z., Li, +X., Zhang, H., Baldridge, J., and Wu, Y. (2022b). Scaling autoregressive +models for content-rich text-to-image generation. +Yuan, L., Chen, D., Chen, Y.-L., Codella, N., Dai, X., Gao, J., Hu, H., Huang, +X., Li, B., Li, C., et al. (2021). Florence: A new foundation model for +computer vision. +Yuan, S., Shuai, Z., Jiahong, L., Zhao, X., Hanyu, Z., and Jie, T. (2022). +Wudaomm: A large-scale multi-modal dataset for pre-training models. +Zagoruyko, S. and Komodakis, N. (2016). Wide residual networks. CoRR, +abs/1605.07146. 
+Zellers, R., Bisk, Y., Farhadi, A., and Choi, Y. (2019). From recognition to +cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF +conference on computer vision and pattern recognition, pages 6720–6731. +Zellers, R., Bisk, Y., Schwartz, R., and Choi, Y. (2018). Swag: A large-scale +adversarial dataset for grounded commonsense inference. +Zeng, A., Wong, A., Welker, S., Choromanski, K., Tombari, F., Purohit, A., +Ryoo, M., Sindhwani, V., Lee, J., Vanhoucke, V., et al. (2022). Socratic +models: Composing zero-shot multimodal reasoning with language. +Zhang, C., Yang, Z., He, X., and Deng, L. (2020a). Multimodal intelligence: +Representation learning, information fusion, and applications. 14(3):478–493. +Zhang, H., Xu, T., Li, H., Zhang, S., Huang, X., Wang, X., and Metaxas, D. N. +(2016a). Stackgan: Text to photo-realistic image synthesis with stacked +generative adversarial networks. +Zhang, P., Goyal, Y., Summers-Stay, D., Batra, D., and Parikh, D. (2016b). Yin +and yang: Balancing and answering binary visual questions. In Proceedings + +266 +7 Bibliography +of the IEEE conference on computer vision and pattern recognition, pages +5014–5022. +Zhang, Y., Jiang, H., Miura, Y., Manning, C. D., and Langlotz, C. P. (2020b). +Contrastive learning of medical visual representations from paired images +and text. +Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., and Torralba, A. (2017). +Scene parsing through ade20k dataset. In Proceedings of the IEEE conference +on computer vision and pattern recognition, pages 633–641. +Zhou, Y., Roy, S., Abdolrashidi, A., Wong, D., Ma, P., Xu, Q., Liu, H., +Phothilimthana, P. M., Wang, S., Goldie, A., Mirhoseini, A., and Laudon, J. +(2020). Transferable graph optimizers for ml compilers. +Zhou, Y., Zhang, R., Chen, C., Li, C., Tensmeyer, C., Yu, T., Gu, J., Xu, J., and +Sun, T. (2021). LAFITE: towards language-free training for text-to-image +generation. +Zhu, M., Pan, P., Chen, W., and Yang, Y. (2019). 
DM-GAN: dynamic memory +generative adversarial networks for text-to-image synthesis. +Zhu, X., Yao, J., and Huang, J. (2016). Deep convolutional neural network +for survival analysis with pathological images. In 2016 IEEE International +Conference on Bioinformatics and Biomedicine (BIBM), pages 544–547. +IEEE. +Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A., +and Fidler, S. (2015). Aligning books and movies: Towards story-like visual +explanations by watching movies and reading books. In Proceedings of the +IEEE international conference on computer vision, pages 19–27. +Zhuang, C., Yan, S., Nayebi, A., Schrimpf, M., Frank, M. C., DiCarlo, J. J., +and Yamins, D. L. (2021). Unsupervised neural network models of the +ventral visual stream. 118(3):e2014196118. + diff --git a/6tE4T4oBgHgl3EQfCAsz/content/tmp_files/load_file.txt b/6tE4T4oBgHgl3EQfCAsz/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..0ca2f22ca5ff22fc691d1dad8821b8e213f55962 --- /dev/null +++ b/6tE4T4oBgHgl3EQfCAsz/content/tmp_files/load_file.txt @@ -0,0 +1,10540 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf,len=10539 +page_content='Multimodal Deep Learning arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='04856v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='CL] 12 Jan 2023 Contents Preface v Foreword 1 1 Introduction 3 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 Introduction to Multimodal Deep Learning .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 86 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 Text2Image .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 100 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 Images supporting Language Models .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 125 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 Text supporting Vision Models .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 146 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='5 Models for both modalities .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 159 4 Further Topics 181 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 Including Further Modalities .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 181 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 Structured + Unstructured Data .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 197 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 Multipurpose Models .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 209 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 Generative Art .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' .' 
5 Conclusion
6 Epilogue
6.1 New influential architectures
6.2 Creating videos
7 Acknowledgements

Preface
Author: Matthias Aßenmacher

FIGURE 1: LMU seal (left) style-transferred to Van Gogh's Sunflower painting (center) and blended with the prompt "Van Gogh, sunflowers" via CLIP+VQGAN (right).

In the last few years, there have been several breakthroughs in the methodologies used in Natural Language Processing (NLP) as well as Computer Vision (CV).
Beyond these improvements on single-modality models, large-scale multimodal approaches have become a very active area of research. In this seminar, we reviewed these approaches and attempted to create a solid overview of the field, starting with the current state-of-the-art approaches in the two subfields of Deep Learning individually. Further, modeling frameworks are discussed where one modality is transformed into the other (Chapter 3.1 and Chapter 3.2), as well as models in which one modality is utilized to enhance representation learning for the other (Chapter 3.3 and Chapter 3.4). To conclude the second part, architectures with a focus on handling both modalities simultaneously are introduced (Chapter 3.5).
Finally, we also cover other modalities (Chapter 4.1 and Chapter 4.2) as well as general-purpose multimodal models (Chapter 4.3), which are able to handle different tasks on different modalities within one unified architecture. One interesting application (Generative Art, Chapter 4.4) eventually caps off this booklet.

FIGURE 2: Creative Commons License

This book is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Foreword
Author: Matthias Aßenmacher

This book is the result of an experiment in university teaching.
We were inspired by a group of other PhD students around Christoph Molnar, who conducted another seminar on Interpretable Machine Learning in this format. Instead of letting every student work on a seminar paper, more or less isolated from the other students, we wanted to foster collaboration between the students and enable them to produce a tangible output (one that isn't written only to spend the rest of its days in (digital) drawers). In the summer term 2022, some Statistics, Data Science and Computer Science students signed up for our seminar entitled "Multimodal Deep Learning" and had (before the kick-off meeting) no idea what they had signed up for: writing an entire book by the end of the semester.

We were bound by the examination rules for conducting the seminar, but otherwise we could deviate from the traditional format. We deviated in several ways:

1. Each student project is a chapter of this booklet, linked content-wise to other chapters, since there is partly a large overlap between the topics.
2. We gave challenges to the students instead of papers. The challenge was to investigate a specific impactful recent model or method from the field of NLP, Computer Vision or Multimodal Learning.
3. We designed the work to live beyond the seminar.
4. We emphasized collaboration. Students wrote the introductions to chapters in teams and reviewed each other's individual texts.

Technical Setup

The book chapters are written in the Markdown language. The simulations, data examples and visualizations were created with R (R Core Team, 2018). To combine R code and Markdown, we used rmarkdown.
The book was compiled with the bookdown package. We collaborated using git and GitHub. For details, head over to the book's repository.

1 Introduction
Author: Nadja Sauter
Supervisor: Matthias Aßenmacher

1.1 Introduction to Multimodal Deep Learning

There are five basic human senses: hearing, touch, smell, taste and sight. Possessing these five modalities, we are able to perceive and understand the world around us. Thus, "multimodal" means to combine different channels of information simultaneously to understand our surroundings. For example, when toddlers learn the word "cat", they use different modalities: saying the word out loud, pointing at cats and making sounds like "meow".
Using the human learning process as a role model, artificial intelligence (AI) researchers also try to combine different modalities to train deep learning models. On a superficial level, deep learning algorithms are based on a neural network that is trained to optimize some objective, which is mathematically defined via the so-called loss function. The optimization, i.e. minimizing the loss, is done via a numerical procedure called gradient descent. Consequently, deep learning models can only handle numeric input and can only produce numeric output. However, in multimodal tasks we are often confronted with unstructured data like pictures or text. Thus, the first major problem is how to represent the input numerically.
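The loss-minimization idea can be sketched in a few lines. The quadratic loss below is a hand-picked toy stand-in, not a model from this book; it only illustrates the "follow the negative gradient" mechanic:

```python
# Minimal gradient descent sketch, assuming a toy quadratic loss
# L(w) = (w - 3)^2 with gradient dL/dw = 2 * (w - 3).
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to reduce the loss."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_opt = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
print(round(w_opt, 4))  # converges towards the minimizer w = 3
```

Real networks apply the same update rule to millions of parameters, with gradients computed by backpropagation instead of a closed-form derivative.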
The second issue with regard to multimodal tasks is how exactly to combine different modalities. For instance, a typical task could be to train a deep learning model to generate a picture of a cat. First of all, the computer needs to understand the text input "cat" and then somehow translate this information into a specific image. Therefore, it is necessary to identify the contextual relationships between words in the text input and the spatial relationships between pixels in the image output. What might be easy for a toddler in pre-school is a huge challenge for the computer. Both have to learn some understanding of the word "cat" that comprises the meaning and appearance of the animal. A common approach in modern deep learning is to generate embeddings that represent the cat numerically as a vector in some latent space.
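A small sketch of what "a vector in some latent space" buys us: semantically similar items end up close together, which we can measure with cosine similarity. The 3-d vectors below are hand-made for illustration; learned embeddings have hundreds of dimensions:

```python
import math

# Hypothetical toy embeddings; real models learn these from data.
emb = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.95],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# "cat" is closer to "dog" than to "car" in this toy latent space.
print(cosine(emb["cat"], emb["dog"]) > cosine(emb["cat"], emb["car"]))  # True
```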
However, to achieve this, different approaches and algorithmic architectures have been developed in recent years. This book gives an overview of the different methods used in state-of-the-art (SOTA) multimodal deep learning to overcome challenges arising from unstructured data and combining inputs of different modalities.

1.2 Outline of the Booklet

Since multimodal models often use text and images as input or output, methods of Natural Language Processing (NLP) and Computer Vision (CV) are introduced as foundations in Chapter 2. Methods in the area of NLP try to handle text data, whereas CV deals with image processing. With regard to NLP (subsection 2.1), one concept of major importance is the so-called word embedding, which is nowadays an essential part of (nearly) all multimodal deep learning architectures.
This concept also sets the foundation for transformer-based models like BERT (Devlin et al., 2018a), which achieved a huge improvement in several NLP tasks. Especially the (self-)attention mechanism (Vaswani et al., 2017a) of transformers revolutionized NLP models, which is why most of them rely on the transformer as a backbone. In Computer Vision (subsection 2.2), different network architectures, namely ResNet (He et al., 2015), EfficientNet (Tan and Le, 2019a), SimCLR (Chen et al., 2020a) and BYOL (Grill et al., 2020b), will be introduced.
In both fields it is of great interest to compare the different approaches and their performance on challenging benchmarks. For this reason, the last subsection 2.3 of Chapter 2 gives an overall overview of different data sets, pre-training tasks and benchmarks for CV as well as for NLP. The second chapter (see Chapter 3) focuses on different multimodal architectures, covering a wide variety of ways in which text and images can be combined. The presented models combine and advance different methods of NLP and CV. First of all, looking at Img2Text tasks (subsection 3.1), the data set Microsoft COCO for object recognition (Lin et al., 2014a) and the meshed-memory transformer for image captioning (M2 Transformer) (Cornia et al., 2019) will be presented.
Conversely, researchers developed methods to generate pictures based on a short text prompt (subsection 3.2). The first models accomplishing this task were generative adversarial networks (GANs) (Goodfellow et al., 2014b) and variational autoencoders (VAEs) (Kingma and Welling, 2019). These methods were improved in recent years, and today's SOTA transformer architectures and text-guided diffusion models like DALL-E (Ramesh et al., 2021a) and GLIDE (Nichol et al., 2021a) achieve remarkable results. Another interesting question is how images can be utilized to support language models (subsection 3.3).
This can be done via sequential embeddings, more advanced grounded embeddings or, again, inside transformers. On the other hand, one can also look at text supporting CV models like CLIP (Radford et al., 2021b), ALIGN (Jia et al., 2021a) and Florence (Yuan et al., 2021) (subsection 3.4). They use foundation models, i.e. reusing models (e.g. CLIP inside DALL-E 2), as well as a contrastive loss for connecting text with images.
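The zero-shot classification idea behind CLIP can be sketched as follows: embed the image and a text prompt per candidate label into the same space, then pick the label whose text embedding is most similar to the image embedding. The vectors below are toy values, not real encoder output:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical text embeddings for prompt-style labels.
text_emb = {
    "a photo of a cat": [0.9, 0.1, 0.2],
    "a photo of a dog": [0.2, 0.9, 0.1],
}
# Pretend this vector came from the image encoder for a cat photo.
image_emb = [0.88, 0.15, 0.25]

# Zero-shot prediction: the label whose text embedding matches best.
label = max(text_emb, key=lambda t: cosine(image_emb, text_emb[t]))
print(label)  # a photo of a cat
```

Training with a contrastive loss is what pushes matching image/text pairs together (and mismatched pairs apart) so that this comparison becomes meaningful.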
Besides, zero-shot learning makes it possible to classify new and unseen data without expensive fine-tuning. Especially the open-source architecture CLIP (Radford et al., 2021b) for image classification and generation attracted a lot of attention last year. At the end of the second chapter, some further architectures to handle text and images simultaneously are introduced (subsection 3.5). For instance, Data2Vec uses the same learning method for speech, vision and language and in this way aims to find a general approach to handle different modalities in one architecture. Furthermore, VilBert (Lu et al., 2019a) extends the popular BERT architecture to handle both image and text as input by implementing co-attention. This method is also used in Google Deepmind's Flamingo (Alayrac et al., 2022). In addition, Flamingo aims to tackle multiple tasks with a single visual language model via few-shot learning and freezing the pre-trained vision and language model.

In the last chapter (see Chapter 4), methods are introduced that are also able to handle modalities other than text and image, e.g. video, speech or tabular data. The overall goal here is to find a general multimodal architecture based on challenges rather than modalities. Therefore, one needs to handle problems of multimodal fusion and alignment and decide whether to use a joint or coordinated representation (subsection 4.1). Moreover, we go into more detail about how exactly to combine structured and unstructured data (subsection 4.2); different fusion strategies which evolved in recent years will be presented. This is illustrated in this book by two use cases in survival analysis and economics. Besides this, another interesting research question is how to tackle different tasks in one so-called multi-purpose model (subsection 4.3), as is intended by Google researchers (Barham et al., 2022) with their "Pathway" model. Last but not least, we show one exemplary application of Multimodal Deep Learning in the arts scene, where image generation models like DALL-E (Ramesh et al., 2021a) are used to create art pieces in the area of Generative Arts (subsection 4.4).
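The simplest joint-representation strategy, early fusion, can be sketched in a few lines: concatenate per-modality feature vectors into one joint vector before a downstream model. The feature values and weights below are hand-set placeholders, not learned parameters:

```python
# Early (joint) fusion sketch, assuming feature vectors have already been
# extracted for each modality (e.g. by text, image and tabular encoders).
def early_fusion(text_feat, image_feat, tab_feat):
    """Concatenate modality features into one joint representation."""
    return text_feat + image_feat + tab_feat  # list concatenation

def linear_score(features, weights, bias=0.0):
    """A stand-in for whatever model consumes the joint representation."""
    return sum(f * w for f, w in zip(features, weights)) + bias

joint = early_fusion([0.2, 0.7], [0.5, 0.1, 0.9], [1.0])
print(len(joint))  # 6 features pooled from three modalities
```

Coordinated representations instead keep separate per-modality embeddings and tie them together with a constraint such as a contrastive loss.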
2 Introducing the modalities

Authors: Cem Akkus, Vladana Djakovic, Christopher Benjamin Marquardt
Supervisor: Matthias Aßenmacher

Natural Language Processing (NLP) has existed for about 50 years, but it is more relevant than ever. There have been several breakthroughs in this branch of machine learning that is concerned with spoken and written language. For example, learning internal representations of words was one of the greater advances of the last decade. Word embeddings (Mikolov et al., 2013a; Bojanowski et al., 2016) made this possible by allowing developers to encode words as dense vectors that capture their underlying semantic content. In this way, similar words are embedded close to each other in a lower-dimensional feature space. Another important challenge was solved by encoder-decoder (also called sequence-to-sequence) architectures (Sutskever et al., 2014), which make it possible to map input sequences to output sequences of different lengths. They are especially useful for complex tasks like machine translation, video captioning or question answering. This approach makes minimal assumptions on the sequence structure and can deal with different word orders as well as active and passive voice. A significant state-of-the-art technique is attention (Bahdanau et al., 2014), which enables models to actively shift their focus, just like humans do. It allows following one thought at a time while suppressing information irrelevant to the task, and it has consequently been shown to significantly improve performance on tasks like machine translation. By giving the decoder direct access to the source, the bottleneck is avoided; at the same time, attention provides a shortcut to faraway states and thus helps with the vanishing gradient problem.
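The mechanics of attention can be sketched compactly in numpy: each query scores all keys, the scores are normalized with a softmax, and the values are averaged with those weights. The dot-product scoring below is a simplification for illustration (Bahdanau et al. (2014) originally scored query-key pairs with a small feed-forward network); all names and shapes here are made up.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each query attends over all keys; the resulting weights form a
    # probability distribution used to average the values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 decoder positions ("queries")
K = rng.normal(size=(5, 4))   # 5 encoder positions ("keys")
V = rng.normal(size=(5, 4))   # values attached to the keys
out, w = attention(Q, K, V)
print(out.shape, w.shape)     # (2, 4) (2, 5)
```

Because the weights sum to one per query, the decoder can "look at" any source position directly, which is exactly the shortcut to faraway states described above.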
One of the most recent sequence modeling techniques is the Transformer (Vaswani et al., 2017b), which is solely based on attention and does not have to process the input data sequentially (like RNNs do). Therefore, such deep learning models are better at remembering context introduced earlier in long sequences. The Transformer is currently the dominant paradigm in NLP and even makes better use of GPUs, because it can perform operations in parallel. Transformer architectures like BERT (Devlin et al., 2018b), T5 (Raffel et al., 2019a) or GPT-3 (Brown et al., 2020) are pre-trained on a large corpus and can be fine-tuned for specific language tasks. They have the capability to generate stories, poems, code and much more.
With the help of the aforementioned breakthroughs, deep networks have been successful in retrieving information and finding representations of semantics in the text modality. In the next paragraphs, developments for another modality, images, are presented. Computer vision (CV) focuses on replicating parts of the complexity of the human visual system and enabling computers to identify and process objects in images and videos in the same way that humans do. In recent years it has become one of the main and most widely applied fields of computer science. However, there are still open problems that are current research topics, whose solutions depend on the researcher's view on the topic. One of these problems is how to optimize deep convolutional neural networks for image classification, where classification accuracy depends on network width, depth and image resolution.
One way to address the degradation of training accuracy is to introduce a deep residual learning framework (He et al., 2015). Another, less common way to achieve better accuracy is to scale up ConvNets, for example by scaling up the image resolution. Based on this observation, a simple yet effective compound scaling method was proposed, called EfficientNets (Tan and Le, 2019a). Another state-of-the-art trend in computer vision is learning effective visual representations without human supervision. Discriminative approaches based on contrastive learning in the latent space have recently shown great promise, achieving state-of-the-art results; a simple framework for contrastive learning of visual representations, called SimCLR, outperforms previous work (Chen et al., 2020a).
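The central idea of residual learning fits in one line: instead of learning a target mapping H(x) directly, each block learns the residual F(x) and outputs F(x) + x, so the identity mapping is trivially representable and very deep stacks remain trainable. A minimal numpy sketch, with toy dense weights in place of the convolutions and batch normalization used in actual ResNets:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    # Learn the residual F(x) = relu(x W1) W2 and add the input back:
    # output = F(x) + x (He et al., 2015).
    return relu(x @ W1) @ W2 + x

rng = np.random.default_rng(1)
x = rng.normal(size=(3, 8))

# With all-zero weights, F(x) = 0 and the block is exactly the identity,
# which is why stacking many such blocks does not degrade training.
W_zero = np.zeros((8, 8))
print(np.allclose(residual_block(x, W_zero, W_zero), x))  # True
```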
However, other research proposes as an alternative a simple "swapped" prediction problem, in which the code of one view is predicted from the representation of another view; here, features are learned by Swapping Assignments between multiple Views of the same image (SwAV) (Caron et al., 2020). Further recent contrastive methods are trained by reducing the distance between representations of different augmented views of the same image ("positive pairs") and increasing the distance between representations of augmented views from different images ("negative pairs"). Bootstrap Your Own Latent (BYOL) is a new algorithm for self-supervised learning of image representations (Grill et al., 2020b). Self-attention-based architectures, in particular Transformers, have become the model of choice in natural language processing (NLP).
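A toy sketch of such a contrastive objective, loosely in the spirit of the NT-Xent loss used by SimCLR (heavily simplified: it contrasts in only one direction and uses random vectors in place of encoder outputs):

```python
import numpy as np

def normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def contrastive_loss(z1, z2, tau=0.5):
    # InfoNCE-style objective: for image i, (z1[i], z2[i]) is the positive
    # pair; the other images in the batch act as negatives.
    z1, z2 = normalize(z1), normalize(z2)
    sim = z1 @ z2.T / tau  # cosine similarities, scaled by a temperature
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))  # reward matching the diagonal

rng = np.random.default_rng(2)
z = rng.normal(size=(4, 16))                 # 4 images, 16-d representations
views = z + 0.01 * rng.normal(size=(4, 16))  # slightly "augmented" 2nd views

aligned = contrastive_loss(z, views)                          # correct pairs
mismatched = contrastive_loss(z, np.roll(views, 1, axis=0))   # shuffled pairs
print(aligned < mismatched)  # correctly paired views give the lower loss
```

Pulling positives together while pushing negatives apart is precisely what makes the learned representations linearly separable downstream.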
Inspired by NLP successes, multiple works try combining CNN-like architectures with self-attention, some replacing the convolutions entirely. The latter models, while theoretically efficient, have not yet been scaled effectively on modern hardware accelerators due to the use of specialized attention patterns. Inspired by the Transformer scaling successes in NLP, one experiment applies a standard Transformer directly to images (Dosovitskiy et al., 2020b). Due to the widespread application of computer vision, these problems differ and are constantly at the center of attention of more and more research. With the rapid development in NLP and CV in recent years, it was just a question of time until both modalities were merged to tackle multi-modal tasks. The release of DALL-E 2 just hints at what one can expect from this merge in the future.
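The first step of applying a standard Transformer to an image is turning it into a sequence of flattened patches, which then play the role words play in NLP. A sketch of that patch extraction (shapes are illustrative; the real Vision Transformer additionally projects each patch linearly and adds position embeddings):

```python
import numpy as np

def image_to_patches(img, p):
    # Split an H x W x C image into non-overlapping p x p patches and
    # flatten each one into a vector -- the "token sequence" a Vision
    # Transformer consumes (Dosovitskiy et al., 2020b).
    H, W, C = img.shape
    assert H % p == 0 and W % p == 0, "image must tile evenly into patches"
    patches = img.reshape(H // p, p, W // p, p, C)
    patches = patches.transpose(0, 2, 1, 3, 4)  # group by patch grid cell
    return patches.reshape(-1, p * p * C)

img = np.arange(32 * 32 * 3, dtype=float).reshape(32, 32, 3)
tokens = image_to_patches(img, 8)
print(tokens.shape)  # (16, 192): a 4 x 4 grid of patches, each 8*8*3 values
```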
DALL-E 2 is able to create photorealistic images or even art from any given text input; it takes the information of one modality and turns it into another modality. Multi-modal datasets, which are still relatively rare, are needed to make this possible. This shows the importance of available data and the ability to use it, all the more. Nevertheless, all modalities need huge datasets to pre-train their models. It is common to pre-train a model and fine-tune it afterwards for a specific task on another dataset. For example, every state-of-the-art CV model uses a classifier pre-trained on an ImageNet-based dataset. The cardinality of the datasets used for CV is immense, but the datasets used for NLP are of a completely different magnitude.
BERT uses the English Wikipedia and the BooksCorpus to pre-train the model; the latter consists of almost 1 billion words and 74 million sentences. The pre-training of GPT-3 is composed of five huge corpora: Common Crawl, Books1, Books2, Wikipedia and WebText2. Unlike language model pre-training, which can leverage tremendous amounts of natural language data, vision-language tasks require high-quality image descriptions that are hard to obtain for free. Widely used pre-training datasets for vision-language pre-trained models (VL-PTMs) are Microsoft Common Objects in Context (COCO), Visual Genome (VG), Conceptual Captions (CC), Flickr30k, LAION-400M and LAION-5B, the latter being the biggest openly accessible image-text dataset to date. Besides the importance of pre-training data, there must also be a way to test or compare different models. A reasonable approach is to compare their performance on specific tasks, which is called benchmarking.
A nice feature of benchmarks is that they allow us to compare the models to a human baseline. Different metrics are used to compare the performance of the models; accuracy is widely used, but there are others as well. For CV, the most common benchmark datasets are ImageNet, ImageNet-ReaL, CIFAR-10(0), Oxford-IIIT Pet, Oxford Flowers 102, COCO and the Visual Task Adaptation Benchmark (VTAB). The most common benchmarks for NLP are the General Language Understanding Evaluation (GLUE), SuperGLUE, SQuAD 1.1, SQuAD 2.0, SWAG, RACE, ReCoRD and CoNLL-2003. VTAB, GLUE and SuperGLUE also provide public leaderboards.
Cross-modal tasks such as Visual Question Answering (VQA), Visual Commonsense Reasoning (VCR), Natural Language Visual Reasoning (NLVR), Flickr30K, COCO and Visual Entailment are common benchmarks for VL-PTMs.

2.1 State-of-the-art in NLP

Author: Cem Akkus
Supervisor: Matthias Aßenmacher

2.1.1 Introduction

Natural Language Processing (NLP) has existed for about 50 years, but it is more relevant than ever. There have been several breakthroughs in this branch of machine learning that is concerned with spoken and written language. In this work, the most influential ones of the last decade are going to be presented, starting with word embeddings, which efficiently model word semantics.
Encoder-decoder architectures represent another step forward by making minimal assumptions about the sequence structure. Next, the attention mechanism allows human-like focus shifting to put more emphasis on the more relevant parts. Then, the Transformer applies attention in its architecture to process the data non-sequentially, which boosts the performance on language tasks to exceptional levels. At last, the most influential Transformer architectures are presented before a few current topics in natural language processing are discussed.

2.1.2 Word Embeddings

As mentioned in the introduction, one of the earlier advances in NLP was learning internal representations of words. Before that, a big problem with text modelling was its messiness, while machine learning algorithms undoubtedly prefer structured, well-defined fixed-length inputs.
On a granular level, models rather work with numerical than textual data. Thus, by using very basic techniques like one-hot encoding or bag-of-words, a text is converted into its equivalent vector of numbers without losing information. In the example depicting one-hot encoding (see Figure 2.1), there are ten simple words and the dark squares indicate the only index with a non-zero value.

FIGURE 2.1: Ten one-hot encoded words (Source: Pilehvar and Camacho-Collados (2021))

In contrast, there are multiple non-zero values when using bag-of-words, which is another way of extracting features from a text for use in modelling: we measure whether (and how often) a word from a vocabulary of known words is present. It is called bag-of-words because the order is disregarded here.
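Both encodings are easy to compute by hand. A short sketch (the toy vocabulary below is made up for illustration, not the one from the figure):

```python
from collections import Counter

vocab = ["basket", "cloud", "desk", "fork", "lion",
         "plate", "rabbit", "table", "tree", "woods"]
index = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    # Exactly one non-zero entry: the word's index in the vocabulary.
    v = [0] * len(vocab)
    v[index[word]] = 1
    return v

def bag_of_words(text):
    # Count occurrences of known words; word order is discarded entirely.
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

print(one_hot("desk"))
# [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
print(bag_of_words("the plate on the desk next to another desk"))
# [0, 0, 2, 0, 0, 1, 0, 0, 0, 0]
```

Note how the vector length equals the vocabulary size in both cases, which is precisely the dimensionality problem discussed next.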
Treating words as atomic units has some plausible reasons, like robustness and simplicity. It has even been argued that simple models trained on a huge amount of data outperform complex models trained on less data. However, simple techniques are problematic for many tasks, e.g. when it comes to relevant in-domain data for automatic speech recognition. The amount of high-quality transcribed speech data is often limited to just millions of words, so simply scaling up simpler models is not possible in certain situations, and therefore more advanced techniques are needed. Additionally, thanks to the progress of machine learning techniques, it has become realistic to train more complex models on massive amounts of data, and more complex models generally outperform basic ones.
Other disadvantages of classic word representations are the curse of dimensionality and the generalization problem. The former becomes a problem because the growing vocabulary equivalently increases the feature size, resulting in sparse, high-dimensional vectors. The latter occurs because the similarity between words is not captured, so previously learned information cannot be reused. Besides, assigning a distinct vector to each word is a limitation, which becomes especially obvious for languages with large vocabularies and many rare words. To combat the shortcomings of simple word representations, word embeddings provide efficient, dense representations in which similar words have a similar encoding: words that are closer in the vector space are expected to be similar in meaning.
An embedding is hereby defined as a vector of floating point values (with the length of the vector being a hyperparameter). The values of the embedding are trainable parameters which are learned similarly to a model learning the weights of a dense layer. The dimensionality of the word representations is typically much smaller than the number of words in the dictionary. For example, Mikolov et al. (2013a) called dimensions between 50 and 100 modest for more than a few hundred million words. For small datasets, the dimensionality of the word vectors could start at 8, going up to 1024 for larger datasets. Higher dimensions are expected to pick up more intricate relationships between words, given enough data to learn from.
For any NLP task, it is sensible to start with word embeddings because they allow prior knowledge to be conveniently incorporated into the model and can be seen as a basic form of transfer learning. It is important to note that even though embeddings attempt to represent the meaning of words, and do so to an extent, the semantics of a word in a given context cannot be captured. This is because words have static, precomputed representations in traditional embedding techniques; thus, the word "bank" can refer to either a financial institution or a river bank. Contextual embedding methods offer a solution, but more about them will follow later. It should be noted that words can have various degrees of similarity.

FIGURE 2.2: Three-dimensional word embeddings (Source: Pilehvar and Camacho-Collados (2021)).
In the context of inflectional languages, this becomes obvious because words are adjusted to articulate grammatical categories. For example, in a subspace of the original vector space, nouns with similar endings can be found. However, it even exceeds simple syntactic regularities. With straightforward operations on the word vectors, it can be shown that vector(King) − vector(Man) + vector(Woman) yields a vector that is closest in vector space (and therefore in meaning) to the word "Queen". A simple visualization of this relationship can be seen in the left graph below (see Figure 2.3). The three coordinate systems are representations of higher dimensions that are depicted in this way via dimension reduction techniques.
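The arithmetic itself is elementary. The sketch below uses tiny hand-picked 3-dimensional vectors purely to illustrate the operation; real embeddings are learned from data and have far more dimensions:

```python
import numpy as np

# Hand-crafted toy embeddings: the last coordinate loosely encodes
# "female", the second "royalty". These values are invented for the demo.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "apple": np.array([0.1, 0.0, 0.5]),
}

def closest(vec, exclude):
    # Nearest neighbour by cosine similarity, skipping the query words.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

v = emb["king"] - emb["man"] + emb["woman"]
print(closest(v, exclude={"king", "man", "woman"}))  # queen
```

With real embeddings the analogy vector rarely lands exactly on the target word, which is why the nearest-neighbour lookup (excluding the query words) is part of the standard evaluation protocol.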
Furthermore, the verb-to-tense relationship is expressed in the middle graph, which extends the earlier insight about similar word endings, because in this instance the past tenses of the verbs walking and swimming are not similar in structure. Additionally, on the right side of the figure there is a form of the commonly portrayed and easily understood country-capital example (see Mikolov et al. (2013a)).

FIGURE 2.3: Three types of similarities as word embeddings (Source: Google (2022)).

Another way of using vector representations of words is in the field of translations.
It has been shown that relations can be drawn between the feature spaces of different languages. In the figure below, the distributed word representations of numbers in English and Spanish are compared. In this case, the same numbers have similar geometric arrangements, which suggests that mapping linearly between the vector spaces of the two languages is feasible. Applying this simple method to a larger set of translations between English and Spanish led to remarkable results, achieving almost 90% precision.

FIGURE 2.4: Representations of numbers in English and Spanish (Source: Mikolov et al. (2013c)).

This technique was then used for other experiments. One use case is the detection of dictionary errors.
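The linear mapping between the two vector spaces can be sketched as a least-squares problem on seed translation pairs. The data below is synthetic: a hypothetical "true" map relates toy English vectors to toy Spanish vectors, and we recover it from noisy pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 2-D "English" vectors and "Spanish" counterparts
# related by an unknown linear map W_true plus small noise.
W_true = np.array([[0.9, -0.2],
                   [0.3,  1.1]])
X = rng.normal(size=(50, 2))                          # English seed vectors
Z = X @ W_true.T + 0.01 * rng.normal(size=(50, 2))    # Spanish counterparts

# Learn the mapping by least squares: find M minimizing ||X M - Z||^2.
M, *_ = np.linalg.lstsq(X, Z, rcond=None)

# A new English vector is "translated" by applying the learned map;
# in practice one would then pick the nearest Spanish word vector.
x_new = rng.normal(size=2)
z_pred = x_new @ M
```

Because the seed pairs are nearly noise-free here, the recovered matrix `M` closely matches the transposed true map.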
Taking translations from a dictionary and computing their geometric distance returns a confidence measure. Closely evaluating the translations with low confidence and outputting an alternative (the one that is closest in vector space) results in a plain way to assess dictionary translations. Furthermore, training the word embeddings on a large corpus makes it possible to give sensible out-of-dictionary predictions for words. This was tested by randomly removing a part of the vocabulary beforehand. A look at the predictions revealed that they were often to some extent related to the correct translations with regard to meaning and semantics. Despite the accomplishments in other tasks, translations between distant languages exposed shortcomings of word embeddings. For example, the accuracy of translations between English and Vietnamese was significantly lower.
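The dictionary-error idea above can be sketched as follows. All word pairs and vectors are made up for illustration: each entry stores the mapped source vector next to the vector of the translation currently listed in the dictionary, and cosine similarity acts as the confidence score, so the mismatched pair surfaces as the least confident entry.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical entries: (mapped source vector, listed translation vector).
entries = {
    "dog -> perro": (np.array([1.0, 0.1]), np.array([0.95, 0.15])),  # plausible
    "cat -> gato":  (np.array([0.2, 1.0]), np.array([0.25, 0.90])),  # plausible
    "sun -> luna":  (np.array([1.0, 1.0]), np.array([-1.0, 0.80])),  # suspicious
}

# Confidence = cosine similarity between the mapped vector and the
# listed translation; low confidence flags a likely dictionary error.
confidence = {k: cosine(src, tgt) for k, (src, tgt) in entries.items()}
worst = min(confidence, key=confidence.get)
print(worst)  # -> sun -> luna
```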
This can be ascribed to the two languages not having a good one-to-one correspondence, because the concept of a word in Vietnamese is different than in English. In addition, the Vietnamese model used contains numerous synonyms, which complicates making exact predictions (see Mikolov et al. (2013c)).

We now turn to one of the most impactful embedding techniques: word2vec. It was proposed by Mikolov et al. (2013a) and is not a singular algorithm. It can rather be seen as a family of model architectures and optimizations to learn word representations. Word2vec's popularity also stems from its success on multiple downstream natural language processing tasks. It has a very simple structure which is based on a basic feed-forward neural network. The authors published multiple papers (see Mikolov et al. (2013a), Mikolov et al. (2013c), Mikolov et al. (2013d)) that center around two different but related methods for learning word embeddings (see Figure 2.5). Firstly, the Continuous bag-of-words (CBOW) model aims to predict the middle word based on the surrounding context words. Hence, it considers components before and after the target word. As the order of words in the context is not relevant, it is called a bag-of-words model.
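How CBOW training examples are formed can be sketched in a few lines: for each position in a sentence, the words inside a symmetric window are the (unordered) input and the middle word is the prediction target. The window size and the example sentence are arbitrary choices.

```python
# Sketch of CBOW example generation: context window in, middle word out.
def cbow_pairs(tokens, window=2):
    pairs = []
    for i, target in enumerate(tokens):
        # Words up to `window` positions before and after the target.
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        pairs.append((context, target))
    return pairs

sentence = "the quick brown fox jumps".split()
for context, target in cbow_pairs(sentence):
    print(context, "->", target)
```

The skip-gram variant simply inverts these pairs, predicting each context word from the middle word.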
Secondly, the Continuous skip-gram model only considers the current word and predicts others within a range before and after it in the same sentence. Both models use a softmax classifier for the output layer.

FIGURE 2.5: CBOW and Skip-gram architecture (Source: Mikolov et al. (2013a)).

Bojanowski et al. (2016) then built on skip-gram models by accounting for the morphology (internal structure) of words. A different classical embedding architecture that should at least be mentioned is the GloVe model, which does not use a neural network but combines local context information with global co-occurrence statistics.

2.1.3 Encoder-Decoder

The field of natural language processing is concerned with a variety of different tasks surrounding text. Depending on the type of NLP problem, the network may be confronted with variable-length sequences as input and/or output. This is the case for many compelling applications, such as question answering, dialogue systems, or machine translation. In the following, many examples will explore machine translation in more detail, since it is a major problem domain. Regarding translation tasks, it becomes obvious that input sequences need to be mapped to output sequences of different lengths. To manage this type of input and output, a design with two main parts can be useful.
The first part is called the encoder because, in this part of the network, a variable-length input sequence is transformed into a fixed state. Next, the second component, called the decoder, maps the encoded state to an output sequence of variable length. As a whole, this is known as an encoder-decoder or sequence-to-sequence architecture and has become an effective and standard approach for many applications which even recurrent neural networks with gated hidden units have trouble solving successfully. Deep RNNs may have a chance, but architectures like the encoder-decoder have proven to be the most effective. It can even deal with different word orders and with active as well as passive voice (Sutskever et al., 2014). A simplified example of the encoder-decoder model can be seen in Figure 2.6.
FIGURE 2.6: Translation through a simplified seq2seq model (Source: Manning et al. (2022)).

Before going through the equations quantifying these concepts, it makes sense to examine the sequence-to-sequence design proposed by Cho et al. (2014). An encoder RNN processes the input sequence of length n_x and computes a fixed-length context vector C, which is usually the final hidden state of the encoder or a simple function of the hidden states. As the input sequence is processed, information is added to the hidden state and passed forward in time through the recurrent connections between the hidden states of the encoder. Despite the context vector usually being a simple function of the last hidden state, its role should not be underestimated.
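A minimal sketch of this encoding step, assuming a vanilla tanh RNN cell and random toy weights (real systems use gated cells and trained parameters): the context vector c is simply the final hidden state after the recurrent pass over the input.

```python
import numpy as np

rng = np.random.default_rng(1)

d_in, d_h = 4, 3                                  # input and hidden sizes (toy)
W_x = rng.normal(scale=0.5, size=(d_h, d_in))     # input-to-hidden weights
W_h = rng.normal(scale=0.5, size=(d_h, d_h))      # hidden-to-hidden weights

def encode(xs):
    """Run a vanilla RNN over the inputs; return the final hidden state."""
    h = np.zeros(d_h)
    for x in xs:                                  # recurrent pass over n_x steps
        h = np.tanh(W_x @ x + W_h @ h)
    return h                                      # context vector c

xs = [rng.normal(size=d_in) for _ in range(5)]    # an input sequence, n_x = 5
c = encode(xs)
print(c.shape)  # -> (3,)
```

Note the defining property: the context has a fixed dimension d_h regardless of how long the input sequence is.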
Specifically, the encoded state summarizes important information from the input sequence, e.g. the intent in a question answering task or the meaning of a text in the case of machine translation. After the context is passed to every hidden state of the decoder, the decoder RNN uses this information to produce the target sequence of length n_y, which can of course differ from n_x. At the latest through the above illustration, it is clear that the decoder is particularly interesting to look at in the form of equations. The notation mainly follows Cho et al. (2014). The decoder is another type of RNN which is trained to predict the target based on the hidden state at the last time step.
However, unlike regular RNNs, it is also conditioned on the output of the last time step (y^[t−1]) and a summary of the input c (see Figure 2.7).

FIGURE 2.7: Encoder-decoder architecture (Source: Cho et al. (2014)).

Therefore, the hidden state of the decoder is computed by:

h_d^[t] = f(h_d^[t−1], y^[t−1], c).
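The decoder recurrence h_d^[t] = f(h_d^[t−1], y^[t−1], c) can be sketched with a tanh cell and toy random weights (the weight names and sizes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

d_h, d_y, d_c = 3, 2, 3                           # toy dimensions
W_h = rng.normal(scale=0.5, size=(d_h, d_h))      # previous hidden state term
W_y = rng.normal(scale=0.5, size=(d_h, d_y))      # previous output term
W_c = rng.normal(scale=0.5, size=(d_h, d_c))      # context vector term

def decoder_step(h_prev, y_prev, c):
    # h_d[t] = f(h_d[t-1], y[t-1], c), with f realized here as a tanh cell
    return np.tanh(W_h @ h_prev + W_y @ y_prev + W_c @ c)

h = np.zeros(d_h)
y_prev = np.zeros(d_y)
c = rng.normal(size=d_c)                          # context from the encoder
for _ in range(4):                                # unroll a few decoding steps
    h = decoder_step(h, y_prev, c)
print(h.shape)  # -> (3,)
```

In a full model, each step would also emit y^[t] from h via an output layer, and y^[t] would feed back in as y_prev at the next step.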
Similarly, each conditional probability is given by the following, where f is a non-linear activation function that must produce valid probabilities, e.g. the softmax function:

P(y^[t] | y^[1], ..., y^[t−1], c) = f(h_d^[t], y^[t−1], c).

The two parts are jointly trained to maximize the conditional log-likelihood, where θ denotes the set of model parameters and (x_n, y_n) is an (input sequence, output sequence) pair from the training set of size N:

max_θ (1/N) Σ_{n=1}^{N} log p_θ(y_n | x_n).

The most probable output is usually found using the beam search algorithm. Its core idea is that, at each step of the decoder, we keep track of the k most probable partial translations (which are called hypotheses).
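Beam search can be sketched over a toy model whose next-token probabilities depend only on the last token. The transition table below is made up for illustration; the algorithm keeps the k highest-scoring hypotheses (by summed log-probability) at every step.

```python
import math

# Made-up next-token distributions, keyed by the last emitted token.
probs = {
    "<s>":  {"je": 0.6, "tu": 0.4},
    "je":   {"suis": 0.7, "vais": 0.3},
    "tu":   {"es": 0.9, "vas": 0.1},
    "suis": {"</s>": 1.0},
    "vais": {"</s>": 1.0},
    "es":   {"</s>": 1.0},
    "vas":  {"</s>": 1.0},
}

def beam_search(k=2, max_len=5):
    beams = [(0.0, ["<s>"])]                      # (log-prob, hypothesis)
    for _ in range(max_len):
        candidates = []
        for lp, seq in beams:
            if seq[-1] == "</s>":                 # finished hypothesis
                candidates.append((lp, seq))
                continue
            for tok, p in probs[seq[-1]].items():
                candidates.append((lp + math.log(p), seq + [tok]))
        # Keep only the k most probable hypotheses.
        beams = sorted(candidates, key=lambda b: b[0], reverse=True)[:k]
    return beams[0][1]

best = beam_search()
print(best)  # -> ['<s>', 'je', 'suis', '</s>']
```

Real implementations additionally normalize scores by hypothesis length, since the raw sum of log-probabilities favors shorter sequences.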
Examining the translation presented above with the hidden units unrolled through time could look like Figure 2.8. In particular, multiple hidden layers are recommended by the researchers. The idea is that lower layers compute lower-level features and higher layers compute higher-level features.

FIGURE 2.8: Translation through a seq2seq model (Source: Manning et al. (2022)).

Gated recurrent networks, especially long short-term memory networks, have been found to be effective in both components of the sequence-to-sequence architecture.
Furthermore, it was revealed that deep LSTMs significantly outperform shallow LSTMs. Each additional layer reduced perplexity by nearly 10%, possibly due to the much larger hidden state. For example, Sutskever et al. (2014) used deep LSTMs with 4 layers and 1000 cells at each layer for 1000-dimensional word embeddings. Thus, in total, 8000 real numbers are used to represent a sentence. For simplicity, these neural networks are in the following referred to as RNNs, which does not contradict the insights of this paragraph, as LSTMs are a type of gated RNN (Sutskever et al., 2014).

2.1.4 Attention

Although encoder-decoder architectures simplified dealing with variable-length sequences, they also caused complications. Due to their design, the encoding of the source sentence is a single vector representation (the context vector). The problem is that this state must compress all information about the source sentence into a single vector; this is commonly referred to as the bottleneck problem. To be precise, the entire semantics of arbitrarily long sentences need to be wrapped into a single hidden state. Moreover, it constitutes a difficult learning problem because the information needs to be passed between numerous time steps. This leads to vanishing gradients within the network, as a consequence of factors less than 1 being multiplied with each other at every step. To illustrate, the last sentence is an ideal example of one with which an encoder-decoder approach could have difficulty coping.
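The vanishing-gradient argument is just exponential decay of a repeated product: the gradient through T time steps contains T per-step factors, so if each factor is below 1 in magnitude the product shrinks exponentially. A one-line numeric check, with an assumed per-step factor of 0.9:

```python
# Gradient signal through T time steps, with an assumed per-step
# derivative magnitude of 0.9 (any value < 1 shows the same effect).
factor = 0.9
T = 50
grad_scale = factor ** T          # the product of T per-step factors
print(grad_scale)                 # roughly 0.005: the signal has nearly vanished
```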
In particular, this holds if the sentences are longer than the ones in the training corpus (Manning et al., 2022). Due to the aforementioned reasons, an extension to the sequence-to-sequence architecture was proposed by Bahdanau et al. (2014), which learns to align and translate jointly. For every generated word, the model scans through some positions in the source sentence where the most relevant information is located. Afterwards, based on the context around these positions and the previously generated words, the model predicts the target word for the current time step. This approach is called attention, as it emulates human-like (cognitive) attention.
As a result of directly looking at the source and bypassing the bottleneck, attention provides a solution to the problem. It also mitigates the vanishing gradient problem, since there is now a shortcut to faraway states. Consequently, incorporating the attention mechanism has been shown to considerably boost the performance of models on NLP tasks. A walkthrough of the example below should resolve any outstanding questions regarding the procedure of the attention mechanism. The source sentence is seen on the bottom left, which is given in French and acts as the input for the encoder RNN (in red). Then, the attention scores (in blue) are computed by taking the dot product between the previous output word and the input words. Next, the softmax function turns the scores into a probability distribution (in pink).
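The score-then-softmax-then-weighted-sum pipeline can be sketched with random toy states (the dimensions and values are assumptions): dot-product scores against every encoder hidden state, a softmax over those scores, and a weighted sum as the attention output.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())       # shift for numerical stability
    return z / z.sum()

rng = np.random.default_rng(3)
d_h, N = 4, 6                     # hidden size, number of encoder states (toy)
H_e = rng.normal(size=(N, d_h))   # encoder hidden states h_e^[1..N]
h_d = rng.normal(size=d_h)        # current decoder hidden state h_d^[t]

e = H_e @ h_d                     # dot-product attention scores e^[t]
alpha = softmax(e)                # attention distribution alpha^[t]
attn_output = alpha @ H_e         # weighted sum of the encoder states
```

The attention output lives in the same space as a single encoder state, which is what allows it to be concatenated with the decoder hidden state in the next step of the walkthrough.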
These probabilities are used to take a weighted sum of the encoder's hidden states and form the attention output, which mostly contains information from the hidden states that received high attention. Afterwards, the attention output is concatenated with the decoder hidden state (in green) and used to compute the decoder output as before. In some scenarios, the attention output is also fed into the decoder (along with the usual decoder input). This specific example was chosen because "entarter" means "to hit someone with a pie" and is therefore a word that needs to be translated with many words. Since no direct equivalent exists for this phrase, more than one attention score is expected to be substantially non-zero. In this snapshot, the attention distribution can be seen to have two significant contributors.

The following equations compactly represent the relations brought forward in the last paragraphs and mainly follow Manning et al. (2022). The attention scores e^{[t]} are computed by taking the scalar product of the decoder hidden state with each of the encoder hidden states:

e^{[t]} = [(h_d^{[t]})^T h_e^{[1]}, ..., (h_d^{[t]})^T h_e^{[N]}].

Besides basic dot-product attention, there are also other ways to calculate the attention scores, e.g. multiplicative or additive attention; although they will not be discussed further at this point, they are worth mentioning. Then, applying the softmax to the scalar scores yields the attention distribution α^{[t]}, a probability distribution whose values sum up to 1:
α^{[t]} = softmax(e^{[t]}).

FIGURE 2.9: Translation process with attention mechanism (Source: Manning et al. (2022)).

Next, the attention output a^{[t]} is obtained by using the attention distribution as weights for the encoder hidden states:

a^{[t]} = Σ_{i=1}^{N} α_i^{[t]} h_{e,i}.

Concatenating the attention output with the decoder hidden state and proceeding as in the non-attention sequence-to-sequence model are the final steps:

o^{[t]} = f([a^{[t]}; h_d^{[t]}]).

By visualizing the attention distribution, also called alignments (see Bahdanau et al. (2014)), it is easy to observe what the decoder was focusing on and to understand why it chose a specific translation.
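The score–softmax–weighted-sum procedure above can be sketched in a few lines of NumPy; the hidden states below are random stand-ins for real encoder and decoder states, so only the shapes and the mechanics carry over:

```python
import numpy as np

def attention_step(h_dec, H_enc):
    """One decoder time step of dot-product attention.

    h_dec : (d,)   current decoder hidden state h_d^[t]
    H_enc : (N, d) encoder hidden states h_e^[1], ..., h_e^[N]
    """
    scores = H_enc @ h_dec                         # e^[t]: one score per source position
    scores = scores - scores.max()                 # for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax -> attention distribution
    a = alpha @ H_enc                              # weighted sum of encoder states
    return alpha, a

rng = np.random.default_rng(0)
H_enc = rng.normal(size=(5, 8))   # 5 source words, hidden size 8
h_dec = rng.normal(size=8)
alpha, a = attention_step(h_dec, H_enc)
```

The returned distribution alpha plays exactly the role of α^{[t]}: it sums to one and weights the encoder states in the output a.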
The x-axis of the plot below corresponds to the words in the source sentence (English) and the y-axis to the words in the generated translation (French). Each pixel shows the weight of the source word for the respective target word in grayscale, where 0 is black and 1 is white. This makes apparent which positions in the source sentence were more relevant when generating each target word. As expected, the alignment between English and French is largely monotonic: the pixels are brighter, and therefore the weights are higher, along the main diagonal of the matrix. However, there is an exception, because adjectives and nouns are typically ordered differently between the two languages. Thus, the model (correctly) translated "European Economic Area" into "zone économique européenne". By jumping over two words ("European" and "Economic"), it aligned "zone" with "area".
Then, it looked one word back twice to complete the phrase "zone économique européenne". Additional qualitative analysis has shown that the model's alignments are predominantly analogous to our intuition.

FIGURE 2.10: Attention alignments (Source: Bahdanau et al. (2014)).

2.1.5 Transformer

For this section, Manning et al. (2022) constitutes the main source. RNNs are unrolled from one side to the other, i.e. from left to right or from right to left.
This encodes linear locality, which is a useful heuristic, because nearby words often affect each other's meaning. But what happens when distant words need to interact with each other? For instance, if we mention a person at the beginning of a text and refer back to them only at the very end, the model has to carry that information through the whole text in between (see below). Hence, RNNs take O(sequence length) steps for distant word pairs to interact. Due to gradient problems, it is therefore hard to learn long-distance dependencies. In addition, the linear order is ingrained, even though the sequential structure alone, as is well known, does not tell the whole story. GPUs can perform multiple calculations simultaneously and could massively reduce the execution time of a deep learning algorithm.
However, forward and backward passes lack parallelizability in recurrent models and require O(sequence length) sequential operations. To be precise, future hidden states cannot be computed in full before past states have been computed. This inhibits training on massive data sets.

FIGURE 2.11: Sequential processing of recurrent model (Source: Manning et al. (2022)).

Figure 2.12 indicates the minimum number of steps before the respective state can be calculated.

FIGURE 2.12: Sequential processing of recurrent model with number of steps indicated (Source: Manning et al. (2022)).

After attention was shown to dramatically increase performance, Google researchers took it further and based the transformer solely on attention, i.e. without any RNNs. For this reason, the paper in which it was introduced is called "Attention is all you need". Spoiler: it is not quite all we need, but more about that on the following pages. Transformers have achieved great results in multiple settings such as machine translation and document generation. Their parallelizability allows for efficient pretraining and has made them the standard model architecture. In fact, all top models on the popular aggregate benchmark GLUE are pretrained and Transformer-based.
Moreover, they have even shown promise outside of NLP, e.g. in image classification, protein folding, and ML for systems (see Dosovitskiy et al. (2020a), Jumper et al. (2021), and Zhou et al. (2020), respectively). If recurrence has its flaws, another adjustment of the attention mechanism might be beneficial. Until now, attention was defined from decoder to encoder. Alternatively, attention could also run from one state to all states in the same set. This is the definition of self-attention, which is encoder-encoder or decoder-decoder attention (instead of encoder-decoder) and represents a cornerstone of the transformer architecture.
Figure 2.13 depicts this process, in which each word attends to all words in the previous layer, even though in practice most arrows are eventually omitted.

FIGURE 2.13: Connections of classic attention mechanism (Source: Manning et al. (2022)).

Thinking of self-attention as an approximate hash table eases understanding its intuition. To look up a value, queries are compared against keys in a table. In a hash table, which is shown on the left side of Figure 2.14, there is exactly one key-value pair for each query (hash). In contrast, in self-attention, each key is matched to varying degrees by each query.
Thus, a sum of values weighted by the query-key match is returned.

FIGURE 2.14: Comparison of classic attention mechanism with self-attention via hash tables (Source: Manning et al. (2022)).

The process briefly described in the last paragraph can be summarized by the following steps, which mainly follow Manning et al. (2022). Firstly, query, key, and value vectors are derived for each word x_i:

q_i = W^Q x_i,  k_i = W^K x_i,  v_i = W^V x_i.

Secondly, the attention scores are calculated:

e_{ij} = q_i^T k_j.
Thirdly, to normalize the attention scores, the softmax function is applied:

α_{ij} = softmax(e_{ij}) = exp(e_{ij}) / Σ_k exp(e_{ik}).

Lastly, taking the weighted sum of the values yields the attention output:

a_i = Σ_j α_{ij} v_j.

Incorporating self-attention instead of recurrence has multiple advantages. Since all words interact at every layer, the maximum interaction distance is O(1), which is a crucial upgrade. In addition, the model is deeply bidirectional, because each word attends to the context in both directions. As a result, all word representations per layer can be computed in parallel. Nevertheless, some issues have to be discussed. Attention does no more than weighted averaging, so without neural networks there are no element-wise non-linearities. Their importance cannot be overstated and shows why attention is not actually all that is needed.
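The four steps can be condensed into a short NumPy sketch; the toy sizes and the random projection matrices are illustrative stand-ins for the learned parameters W^Q, W^K, W^V:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 4, 6                      # 4 words, embedding size 6 (toy sizes)
X = rng.normal(size=(N, d))      # one embedding x_i per word, as rows

# Step 1: query, key, and value for every word
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ W_Q, X @ W_K, X @ W_V

# Step 2: attention scores e_ij = q_i^T k_j, for all pairs at once
E = Q @ K.T                                   # (N, N)

# Step 3: row-wise softmax -> attention distribution alpha_ij
E = E - E.max(axis=1, keepdims=True)          # numerical stability
A = np.exp(E) / np.exp(E).sum(axis=1, keepdims=True)

# Step 4: weighted sum of the values -> attention outputs a_i
out = A @ V                                   # (N, d): every word attends to every word
```

Note that all N output rows are produced by a handful of matrix products, which is precisely the parallelism discussed above.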
Furthermore, bidirectionality is not always desired. In language modelling specifically, the model must not be allowed to simply look ahead and observe more than the objective allows. Moreover, the word order is no longer encoded, and the representation is bag-of-words once again. Fortunately, these weaknesses have been addressed in the original transformer architecture proposed by Vaswani et al. (2017c). The first problem is easily fixed by applying a feed-forward layer to the output of attention, which provides a non-linear activation as well as extra expressive power. Then, for cases in which bidirectionality contradicts the learning objective, future states can be masked so that attention is restricted to previous states.
Moreover, the loss of word order can be corrected by adding position representations to the inputs. The more complex deep learning models are, the closer they come to modelling the complexity of the real world. That is why the transformer encoder and decoder consist of many layers of self-attention with feed-forward networks, which are necessary to extract both syntactic and semantic features from sentences. Otherwise, using word embeddings, which are semantically deep representations between words, would be unnecessary (Sejnowski, 2020). At the same time, training deep networks can be troublesome, so some tricks are applied to help with the training process. One of them is to pass the "raw" embeddings directly to the next layer, which prevents forgetting or misrepresenting important information as it is passed through many layers.
This technique is called residual connections and is also believed to smoothen the loss landscape. Additionally, it is problematic to train the parameters of a given layer while its inputs keep shifting because of the layers beneath; normalizing within each layer to mean zero and standard deviation one reduces this uninformative variation and weakens the effect. Another challenge is caused by the dot product tending to take on extreme values, because its variance scales with increasing dimensionality d_k. This is solved by scaled dot-product attention (see Figure 2.15), which computes the dot products of the query with the keys, divides them by the square root of the key dimension, √d_k, and then applies the softmax function to obtain the weights of the values.

FIGURE 2.15: Scaled dot-product attention (Source: Vaswani et al. (2017c)).

Attention learns where to search for relevant information. Attending to several types of information in a sentence at once promises even better results. The idea is therefore to have multiple attention heads per layer: while one attention head might learn to attend to tense information, another might learn to attend to relevant topics. Thus, each head focuses on separate features and constructs its value vectors differently. Multi-headed self-attention is implemented by simply creating n independent attention mechanisms and combining their outputs. At this point, every part that constitutes the encoder in the transformer architecture has been introduced (see Figure 2.17).
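Both ideas, the √d_k scaling and the combination of several heads, can be sketched in NumPy; the random projection matrices below are stand-ins for learned parameters, not the authors' implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: dividing by sqrt(d_k) keeps the scores
    from growing in magnitude with the key dimension d_k."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores = scores - scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ V

def multi_head(X, heads):
    """Run independent attention heads and concatenate their outputs;
    `heads` is a list of (W_Q, W_K, W_V) projection triples."""
    outs = [scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
            for Wq, Wk, Wv in heads]
    return np.concatenate(outs, axis=-1)   # a final linear layer would mix these

rng = np.random.default_rng(0)
N, d, n_heads, d_head = 5, 8, 2, 4         # two heads of size 4 each
heads = [tuple(rng.normal(size=(d, d_head)) for _ in range(3))
         for _ in range(n_heads)]
X = rng.normal(size=(N, d))
Y = multi_head(X, heads)                   # (5, 8), since n_heads * d_head = d
```

Choosing d_head = d / n_heads, as here, keeps the concatenated output at the model dimension, which is also how the original architecture sizes its heads.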
First, positional encodings are included in the input embeddings. There are multiple options to realize this step, e.g. through sinusoids. The multi-head attention just mentioned follows. "Add & Norm" stands for the residual connections and the normalization layer. A feed-forward network follows, again accompanied by residual connections and a normalization layer. All of this is repeated n times. For the decoder, the individual components are similar. One difference is that the outputs go through masked multi-head attention before multi-head attention and the feed-forward network (each with residual connections and layer normalization).
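The sinusoidal variant mentioned above, following the formulas of Vaswani et al. (2017c), can be sketched as follows:

```python
import numpy as np

def sinusoidal_positions(n_pos, d_model):
    """Sinusoidal positional encodings as in Vaswani et al. (2017c):
    PE[pos, 2i] = sin(pos / 10000^(2i/d_model)),
    PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))."""
    pos = np.arange(n_pos)[:, None]              # (n_pos, 1)
    i = np.arange(d_model // 2)[None, :]         # (1, d_model/2)
    angle = pos / 10000 ** (2 * i / d_model)
    pe = np.empty((n_pos, d_model))
    pe[:, 0::2] = np.sin(angle)                  # even dimensions
    pe[:, 1::2] = np.cos(angle)                  # odd dimensions
    return pe

pe = sinusoidal_positions(50, 16)   # positions 0..49, model dimension 16
# these rows are simply added to the corresponding input embeddings
```

Each position receives a unique pattern of wavelengths, so the model can recover relative and absolute order from the sum of embedding and encoding.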
It is critical to ensure that the decoder cannot peek at the future.

FIGURE 2.16: Multi-head attention (Source: Vaswani et al. (2017c)).

To achieve this, the set of keys and queries could be modified at every time step to include only past words; however, that would be very inefficient. Instead, to enable parallelization, future states are masked by setting their attention scores to −∞. After the decoder block has likewise been repeated n times, a linear layer projects the embeddings into a larger vector that has the length of the vocabulary size.
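The −∞ masking trick can be sketched directly on a matrix of raw attention scores (all-zero here purely for illustration):

```python
import numpy as np

def causal_attention_weights(scores):
    """Set every score for a future position to -inf before the softmax,
    so row t only attends to positions 0..t while the whole matrix is
    still computed in one parallel pass."""
    n = scores.shape[0]
    future = np.triu(np.ones((n, n), dtype=bool), k=1)  # True above the diagonal
    scores = np.where(future, -np.inf, scores)
    scores = scores - scores.max(axis=-1, keepdims=True)
    e = np.exp(scores)                                  # exp(-inf) = 0
    return e / e.sum(axis=-1, keepdims=True)

A = causal_attention_weights(np.zeros((4, 4)))
# row t is uniform over positions 0..t and exactly zero for all later positions
```

Because exp(−∞) = 0, the masked positions receive zero weight after the softmax, which is equivalent to removing them from the key set but far cheaper.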
At last, a softmax layer generates a probability distribution over the possible words.

FIGURE 2.17: Transformer architecture (Source: Vaswani et al. (2017c)).

2.1.6 Transformer architectures: BERT, T5, GPT-3

"You shall know a word by the company it keeps", an adage by linguist John Rupert Firth from 1957 goes. Even earlier, in 1935, he stated that "...
the complete meaning of a word is always contextual, and no study of meaning apart from a complete context can be taken seriously". The quotes of the famous linguist sum up the motivation to learn word meaning and context perfectly. Many years later, in 2017, pretraining word embeddings started. However, some complications arise from solely pretraining the first part of the network. For instance, to teach the model all contextual aspects of language, the training data for the downstream task (e.g. question answering) needs to be adequate. Additionally, most of the parameters are usually randomly initialized. Figure 2.18 presents the network discussed, in which the word "movie" gets the same embedding irrespective of the sentence it appears in.
On the contrary, parameters in modern NLP architectures are initialized via pretraining (see Figure 2.18). Furthermore, during the pretraining, certain input parts are hidden to train the model to reconstruct them. This leads to building suitable parameter initializations and robust probability distributions over language.

FIGURE 2.18: Partly pre-trained model (Source: Manning et al. (2022)).

Classic machine learning does not match human learning, specifically in that a model is trained from scratch and can only learn from the training data. In contrast, human beings already have prior knowledge they can apply to new tasks.
Transfer learning emulates this by using an already trained network. The main idea is to use a model that was pretrained on a hard, general language understanding task using endless amounts of data, so that it eventually contains the best possible approximation of language understanding. Afterwards, the training data for the new task is applied to slightly modify the weights of the pretrained model, which is referred to as fine-tuning (Manning et al., 2022).

FIGURE 2.19: Jointly pre-trained model (Source: Manning et al. (2022)).
The specific architecture of a transformer model affects the type of pretraining and the favourable use cases. In the following, three different but very influential transformer architectures will be discussed: BERT can be seen as stacked encoders (Devlin et al., 2018b), T5 aims to combine the good parts of encoders and decoders (Raffel et al., 2019a), while GPT models are stacked decoders (Brown et al., 2020).

2.1.6.1 BERT

Transfer learning led to state-of-the-art results in natural language processing.
One of the architectures that led the way was BERT, which stands for Bidirectional Encoder Representations from Transformers. It receives bidirectional context, which is why it is not a natural fit for language modelling. To train it on this objective regardless, masked language modelling was proposed. The main idea is to cover up a fraction of the input words and let the model predict them. In this way, the LM objective can be used while sustaining connections to words in the future. The masked LM for BERT randomly predicts 15% of all word tokens in each sequence. Of those, 80% are replaced by the MASK token, 10% by a random token, and 10% remain unchanged. Moreover, because the masked words are not even seen in the fine-tuning phase, the model cannot get complacent and relies on strong representations of non-masked words.
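The 15% / 80 / 10 / 10 recipe can be sketched as follows. This is a simplified, token-level illustration; the helper name, the rounding of the 15%, and the toy vocabulary are assumptions, not BERT's actual preprocessing code:

```python
import random

def bert_mask(tokens, vocab, mask_token="[MASK]", seed=None):
    """Select 15% of positions; of those, 80% -> mask token, 10% -> random, 10% unchanged."""
    rng = random.Random(seed)
    n_select = max(1, round(0.15 * len(tokens)))
    targets = rng.sample(range(len(tokens)), n_select)  # positions the model must predict
    corrupted = list(tokens)
    for pos in targets:
        r = rng.random()
        if r < 0.8:
            corrupted[pos] = mask_token                 # 80%: replace with [MASK]
        elif r < 0.9:
            corrupted[pos] = rng.choice(vocab)          # 10%: replace with a random token
        # remaining 10%: keep the original token
    return corrupted, sorted(targets)
```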
Initially, BERT had an additional objective of predicting whether one sentence follows another, which is known as next sentence prediction. However, it was dropped in later work due to having an insignificant effect. BERT is hugely versatile and was greatly popular after its release. Fine-tuning BERT led to outstanding results on a variety of applications, including question answering, sentiment analysis and text summarization. Owing to its design, however, if the task involves generating sequences, pretrained decoders outperform pretrained encoders like BERT. Even though it would not be recommended for autoregressive generation, up to this day, "small" models like BERT are applied as general tools for numerous tasks.
2.1.6.2 T5

The Text-To-Text Transfer Transformer (T5) is a model that can be regarded as an application of the insights gathered by an extensive empirical study searching for the best transfer learning techniques. It is pretrained on the Colossal Clean Crawled Corpus (C4), an open-source dataset. Raffel et al. (2019a) found that the best pretraining objective for the encoder component was span corruption: word groups (spans) of different lengths are replaced with unique placeholders, and the model learns to decode them. Text preprocessing is necessary for its implementation. For the decoder, the objective is still a language modelling task.
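As an illustration of span corruption, the sketch below replaces explicitly given spans with T5-style sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, …); in the actual pretraining pipeline the spans are sampled randomly rather than passed in:

```python
def span_corrupt(tokens, spans):
    """Replace each (start, end) span with a unique sentinel; return (input, target)."""
    inputs, targets = [], []
    cursor = 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        inputs.extend(tokens[cursor:start])
        inputs.append(sentinel)                  # placeholder in the encoder input
        targets.append(sentinel)
        targets.extend(tokens[start:end])        # the decoder must reproduce the span
        cursor = end
    inputs.extend(tokens[cursor:])
    targets.append(f"<extra_id_{len(spans)}>")   # final sentinel closes the target
    return " ".join(inputs), " ".join(targets)

# Corrupting "for inviting" and "party":
# span_corrupt("Thank you for inviting me to your party last week".split(), [(2, 4), (7, 8)])
```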
Compared to models like BERT, which can only output a span of the input or a class label, T5 reframes all NLP tasks into a unified text-to-text format, where inputs and outputs always consist of text strings. As a result, the same model, loss function, and hyperparameters can be used on any NLP task, such as machine translation, document summarization, question answering, and classification tasks like sentiment analysis. T5 can even be applied to regression tasks by training it to predict the string representation of a number (and not the number itself). Examples of potential use cases are depicted in Figure 2.20.

FIGURE 2.20: Applications of T5 model (Source: Raffel et al. (2019a)).
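In code, the unified format amounts to little more than prepending a task prefix to the raw input. The prefixes below mirror the examples in Figure 2.20, while the helper itself is purely illustrative:

```python
def to_text_to_text(task, text):
    """Turn a task name plus raw input into a single T5-style input string."""
    prefixes = {
        "translate": "translate English to German: ",
        "cola": "cola sentence: ",      # grammatical acceptability judgement
        "summarize": "summarize: ",
    }
    return prefixes[task] + text

# Every task now shares one signature: text in, text out.
# to_text_to_text("translate", "That is good.")
```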
2.1.6.3 GPT-3

As previously stated, the neural architecture influences the type of pretraining. The original GPT architecture consists of a Transformer decoder with 12 layers (Radford et al., 2018). For decoders, it is sensible to simply pretrain them as language models. Afterwards, they can be used as generators, fine-tuning their probability of predicting the next word conditioned on the previous words.
The models are suitable for tasks similar to the training, including any type of dialogue and document summarization. Transformer language models are great for transfer learning. They are fine-tuned by randomly initializing a softmax classifier on top of the pretrained model and training both (with only a very small learning rate and a small number of epochs) so that the gradient propagates through the whole network.
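A toy NumPy sketch of the classifier part of this setup, with the pretrained network abstracted away as fixed feature vectors (an assumption made for brevity; in real fine-tuning those features would change as well, since the gradient flows through the whole network):

```python
import numpy as np

def train_softmax_head(features, labels, n_classes, lr=0.1, epochs=50, seed=0):
    """Fit a randomly initialized softmax classifier on top of fixed features."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.normal(size=(features.shape[1], n_classes))  # random init
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs = e / e.sum(axis=1, keepdims=True)
        W -= lr * features.T @ (probs - onehot) / len(labels)   # cross-entropy gradient
    return W

def cross_entropy(features, labels, W):
    logits = features @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))
```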
The success of BERT in 2018 prompted a "gold rush" in NLP, in which ever greater language models were created. One that topped the headlines and used a custom supercluster for computation was the third iteration of the GPT architecture by OpenAI, known as GPT-3. Figure 2.21 reveals why GPT-3 is a famous example of current research focusing on scaling up neural language models: while the largest T5 model has 11 billion parameters, GPT-3 has 175 billion parameters. Moreover, its training data set contains around 500 billion tokens of text, while the average young American child hears around 6 million words per year (Hart and Risley, 1995). The results of huge language models suggest that they perform some form of learning (without gradient steps) simply from examples provided via context. The tasks are specified by the in-context examples, and the conditional probability distribution simulates performing the task to an extent.
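Such in-context learning requires no gradient updates; the demonstrations are simply written into the prompt. A sketch of how such a few-shot prompt is assembled (the format and example pairs are invented for illustration, not GPT-3's actual evaluation format):

```python
def few_shot_prompt(task_description, demonstrations, query):
    """Build a few-shot prompt: task description, solved examples, then the query."""
    lines = [task_description]
    for source, target in demonstrations:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")          # the model continues from here
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Translate English to German.",
    [("cheese", "Käse"), ("house", "Haus")],
    "tree",
)
```

Conditioned on this string, the language model's next-word distribution effectively performs the translation task.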
FIGURE 2.21: Comparison of the number of parameters of Transformer architectures (Source: Saifee (2020)).

2.1.7 Current Topics

2.1.7.1 Concerns regarding the growing size of Language Models

As the last chapter ended with GPT-3 and emphasized the concerning trend towards ever larger language models, one could ask which other costs arise from these developments. Risks and harms, among them environmental and financial costs, have been studied by Bender et al. (2021).
They state that marginalized communities are not only less likely to benefit from LM progress, but also more likely to suffer from the environmental repercussions of increasing resource consumption. Strubell et al. (2019a) estimated that training a Transformer (big) model resulted in 249t of CO2. To compare, an average human is responsible for approximately 5t of CO2 per year (Ritchie et al., 2020). In addition, they discovered that an estimated increase of 0.1 in BLEU score increased computation costs by $150,000 (for English to German translations). Furthermore, larger models require more data to sufficiently train them.
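To relate the two emission figures quoted above, a quick back-of-the-envelope calculation:

```python
# 249 t CO2 for one training run vs. ~5 t CO2 per person per year
training_run_t = 249
per_person_per_year_t = 5
person_years = training_run_t / per_person_per_year_t
# One such run emits roughly as much CO2 as one person does in ~50 years.
```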
This need for data has resulted in large but poorly documented training data sets. Multiple risks could be mitigated if there were a common understanding of what the model learns. Moreover, it has been argued that datasets consisting of web data over-represent hegemonic views and encode bias towards marginalized communities. This is, among other factors, due to internet access being unevenly distributed; in particular, there is an over-representation of younger internet users and those from developed countries. It is generally naive to educate AI systems on all aspects of the complex world and hope for the beautiful to prevail (Bender et al., 2021).
2.1.7.2 Improving Understanding of Transformer-based models

The results of transformer-based models clearly show that they are successful. However, it is less clear why. The size of the models makes it difficult to experiment with them, and a limited understanding restrains researchers from coming up with further improvements. Therefore, multiple papers have analysed BERT's attention in search of an improved understanding of large transformer models. BERT is one of the smaller popular models, and its attention is naturally interpretable, because the attention weight indicates how significant a word is for the next representation of the current word (Clark et al., 2019). In the following, some of the findings are shared.
BERT representations are hierarchical rather than linear, and they include information about parts of speech, syntactic chunks and roles (Lin et al., 2019; Liu et al., 2019a). Furthermore, BERT has semantic knowledge: for example, it can recognize that "to tip a chef" is better than "to tip a robin" but worse than "to tip a waiter" (Ettinger, 2019). However, it makes sense that BERT has issues with knowledge that is assumed and not mentioned, which especially refers to visual and perceptual properties (Da and Kasai, 2019). Additionally, BERT struggles with inferences, e.g.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' even though it is known that "people walk into houses" and "houses are big", it cannot infer that "houses are bigger than people" (Forbes et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=', 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' While it is true that different transformer heads attend to various patterns (see 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 State-of-the-art in NLP 31 ), interestingly, most of them could be neglected without notable performance loss (Voita et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=', 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Probing attention maps can be tedious, but allows to gain knowledge of common patterns, such as an unexpected amount focusing on the delimiter token SEP .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' FIGURE 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='22: Common patterns of attention heads (Source: Clark et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' (2019)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 2.' 
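The interpretation above — an attention weight saying how much each word contributes to the next representation of the current word — can be sketched as scaled dot-product self-attention. This is a minimal illustration with toy vectors, not BERT's actual multi-head implementation:

```python
import math

def softmax(scores):
    """Turn raw scores into weights that are positive and sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(embeddings):
    """For each word, compute attention weights over all words (scaled dot
    product + softmax) and return the weighted mix of their vectors."""
    d = len(embeddings[0])
    outputs, attention_maps = [], []
    for q in embeddings:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in embeddings]
        w = softmax(scores)  # how significant each word is for this word
        mixed = [sum(wj * vj[i] for wj, vj in zip(w, embeddings))
                 for i in range(d)]
        outputs.append(mixed)
        attention_maps.append(w)
    return outputs, attention_maps

# toy 4-dimensional "embeddings" for three words
words = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0], [1.0, 1.0, 0.0, 0.0]]
new_reprs, attn = self_attention(words)
print([round(w, 2) for w in attn[0]])  # word 0's attention over all three words
```

Each row of `attn` is one attention map: the weights a probing study would visualize when looking for patterns such as attention concentrating on [SEP].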
2.1.7.3 Few-Shot Learning

For NLP tasks, a model is usually trained on a set of labelled examples and is expected to generalize to unseen data. Annotated data is not only costly to create but also difficult to gather for numerous languages, domains, and tasks. In practice, there is often only a very limited amount of labelled examples. Consequently, few-shot learning is a highly relevant research area (Schick and Schütze, 2020). It describes a model that is trained on a limited number of demonstrations to guide its predictions. Referring back to the earlier discussion of model sizes, the benefits of lower computational and environmental costs also have to be mentioned.

Traditional fine-tuning uses a large corpus of example tasks, and the model is updated repeatedly with gradient steps so that it adapts to the task with minimal accuracy error. In contrast, few-shot applications have to complete tasks at test time with only forward passes. They have three main parts: the task description, the examples, and the prompt. In Figure 2.23, the task is a translation from English to French; a few examples as well as the word that should be translated are given. Moreover, zero-shot and one-shot learning refer to the model predicting with no and one learned example, respectively (Brown et al., 2020). It is complicated to create the few-shot examples, since the application relies on them to express the task.
This is why smaller models are susceptible to examples written unfavourably.

[Figure 2.22 panels: Head 1-1 attends broadly; Head 3-1 attends to the next token; Head 8-7 attends to [SEP]; Head 11-6 attends to periods.]

FIGURE 2.23: Few-shot learning (Source: Brown et al. (2020)).

In Brown et al. (2020), it was shown that few-shot performance scales with the number of model parameters. Even though GPT-3's in-context learning improved few-shot prompting capabilities, it is still sensitive to the order of training examples, the decoding strategy, and hyperparameter selection. All of this, combined with the fact that current research uses larger or held-out data sets, leads to the suspicion that the true few-shot ability of language models is overestimated (Perez et al., 2021a). Moreover, Lialin et al. (2022) have found that common transformer models could not resolve compositional questions in a zero-shot fashion and that a model's parameter count does not correlate with performance.
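The three parts described above — task description, examples, and prompt — are simply concatenated into one input string. The following sketch mirrors the translation example from Brown et al. (2020); the helper name and the `=>` separator are illustrative choices, not a fixed API:

```python
def build_few_shot_prompt(task_description, examples, query):
    """Assemble the three parts of a few-shot prompt: the task description,
    the demonstration examples ("shots"), and the final unanswered prompt."""
    lines = [task_description]
    for source, target in examples:
        lines.append(f"{source} => {target}")   # one completed demonstration
    lines.append(f"{query} =>")                 # the model should continue here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("plush girafe", "girafe peluche")],
    "cheese",
)
print(prompt)
```

With zero examples this degenerates to a zero-shot prompt, and with a single pair to a one-shot prompt — the only difference is how many demonstrations precede the query.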
This indicates a limitation for zero-shot prompting with the existing pre-training objectives. However, different models provided the best accuracy on different symbolic reasoning tasks. This suggests that optimization or masking strategies could be more significant than the pre-training, the data set size, or the model architecture.

2.1.8 Summary

Natural Language Processing has been one of the most exciting fields of machine learning in the last decade, considering all the breakthroughs discussed in this work. Word embeddings allowed developers to encode words as dense vectors that capture their underlying semantic content. In this way, similar words are embedded close to each other in a lower-dimensional feature space.
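The notion of "similar words embedded close to each other" is commonly made precise with cosine similarity between embedding vectors. A minimal sketch, using made-up 3-dimensional vectors (real embeddings have hundreds of dimensions and learned values):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (1 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# illustrative toy embeddings, not taken from any real model
king   = [0.9, 0.8, 0.1]
queen  = [0.85, 0.9, 0.05]
banana = [0.1, 0.2, 0.95]
print(cosine_similarity(king, queen) > cosine_similarity(king, banana))  # True
```

Semantically related words end up with a higher cosine similarity, which is exactly the "closeness" in feature space the summary refers to.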
Another important challenge was solved by encoder-decoder (also called sequence-to-sequence) architectures, which made it possible to map input sequences to output sequences of different lengths. They are especially useful for complex tasks like machine translation, video captioning or question answering. A significant state-of-the-art technique is attention, which enables models to actively shift their focus, just like humans do. It allows following one thought at a time while suppressing information irrelevant to the task. As a consequence, it has been shown to significantly improve performance on tasks like machine translation. By giving the decoder the ability to look directly at the source, the bottleneck is avoided; at the same time, attention provides a shortcut to faraway states and thus helps with the vanishing gradient problem.

[Figure 2.23 content: "Translate English to French:" (task description); "sea otter => loutre de mer", "peppermint => menthe poivrée", "plush girafe => girafe peluche" (examples); "cheese =>" (prompt).]
One of the most recent data modelling techniques is the transformer, which is solely based on attention and does not have to process the input data sequentially. Therefore, the deep learning model is better at remembering context introduced earlier in long sequences. It is currently the dominant paradigm in NLP and makes better use of GPUs because it can perform parallel operations. Transformer architectures like BERT, T5 or GPT-3 are pre-trained on a large corpus and can be fine-tuned for specific language tasks. They can generate stories, poems, code and much more. Currently, there seems to be breaking transformer news nearly every week with no sign of slowing down. This is why many trends could be recognized as relevant current topics. One of them is the increasing concern regarding the growing size of language models and the correlated environmental and financial costs.
Another active research aspect is concerned with improving the understanding of transformer-based models in order to further advance them. Additionally, there are many studies about achieving respectable results on language modelling tasks after learning from only a few examples, which is known as few-shot learning.

2.2 State-of-the-art in Computer Vision

Author: Vladana Djakovic
Supervisor: Daniel Schalk

2.2.1 History

The first research about visual perception comes from neurophysiological studies performed in the 1950s and 1960s on cats. The researchers used cats as a model to understand how human vision is composed. Scientists concluded that human vision is hierarchical: neurons detect simple features like edges, followed by more complex features like shapes and even more complex visual representations.
Inspired by this knowledge, computer scientists focused on recreating human neurological structures. At around the same time, as computers became more advanced, computer scientists worked on imitating the behavior of human neurons and simulating a hypothetical neural network. In his book "The Organization of Behaviour" (1949), Donald Hebb stated that neural pathways strengthen with each successive use, especially between neurons that tend to fire at the same time, thus beginning the long journey towards quantifying the complex processes of the brain. The first Hebbian network, inspired by this neurological research, was successfully implemented at MIT in 1954 (Jaspreet, 2019). New findings led to the establishment of the field of artificial intelligence in 1956 on the campus of Dartmouth College. Scientists began to develop ideas and to research techniques that would imitate the human eye.
In 1959, early research on developing neural networks was performed at Stanford University, where models called "ADALINE" and "MADALINE" (Multiple ADAptive LINear Elements) were developed. Those models aimed to recognize binary patterns and could predict the next bit (his, 2022). The initial optimism about Computer Vision and neural networks faded after 1969, when the book "Perceptrons" by Marvin Minsky, founder of the MIT AI Lab, stated that the single-perceptron approach to neural networks could not be translated effectively into multi-layered neural networks. The period that followed was known as the AI Winter, which lasted until 2010, when computers and the internet had become widely used. In 2012, a breakthrough in Computer Vision happened at the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). A team from the University of Toronto introduced a deep neural network called AlexNet (Krizhevsky et al., 2012a) that changed the field of artificial intelligence and Computer Vision (CV).
AlexNet achieved an error rate of 16.4%. From then until today, Computer Vision has been one of the fastest developing fields. Researchers are competing to develop a model that would be the most similar to the human eye and help humans in their everyday life. In this chapter, the author will describe only a few recent state-of-the-art models.

2.2.2 Supervised and unsupervised learning

As part of artificial intelligence (AI) and machine learning (ML), there are two basic approaches: supervised learning and unsupervised learning.
Supervised learning (Education, 2020a) is used to train algorithms on labeled datasets that accurately classify data or predict outcomes. With labeled data, the model can measure its accuracy and learn over time. Among others, we can distinguish between two common supervised learning problems: classification and regression. In unsupervised learning (Education, 2020b), unlabelled datasets are analyzed and clustered using machine learning algorithms. These algorithms aim to discover hidden patterns or data groupings without previous human intervention. The ability to find similarities and differences in information is mainly used for three main tasks: clustering, association, and dimensionality reduction. Solving problems where the dataset can be both labeled and unlabeled requires a semi-supervised approach that lies between supervised and unsupervised learning.
It is useful when extracting relevant features from complex and high-volume data, e.g., medical images. Recently, a new research topic has appeared in the machine learning community: self-supervised learning. Self-supervised learning is a process where the model trains itself to learn one part of the input from another (techslang, 2020). As a subset of unsupervised learning, it involves machines labeling, categorizing, and analyzing information independently and drawing conclusions based on connections and correlations. It can also be considered an autonomous form of supervised learning, since it does not require human input to label data. Unlike unsupervised learning, self-supervised learning does not focus on clustering or grouping (Shah, 2022).
One part of self-supervised learning is contrastive learning, which is used to learn the general features of an unlabeled dataset by identifying similar and dissimilar data points. It is utilized to train the model to learn about the data without any annotations or labels (Tiu, 2021).

2.2.3 Scaling networks

Ever since the introduction of AlexNet in 2012, the problem of scaling convolutional neural networks (ConvNets) has been a topic of active research. A ConvNet can be scaled in three dimensions: depth, width, or image size. One of the first studies, in 2015, showed that network depth is crucial for image classification. The question of whether stacking more layers enables the network to learn better led to deep residual networks called ResNets (He et al., 2015), which will be described in this work. Later on, scaling networks by their depth became the most popular way to improve their performance. The second approach is to scale ConvNets by their width. Wider networks tend to capture more fine-grained features and are easier to train (Zagoruyko and Komodakis, 2016). Lastly, scaling the image resolution can improve the network's performance: with higher-resolution input images, ConvNets can capture more fine-grained patterns. GPipe (Huang et al., 2018) is one of the most famous networks created with this technique. The question of whether it is possible to scale along all three dimensions was answered by Tan and Le (2019a) in the work presenting EfficientNet.
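The compound scaling idea behind EfficientNet can be sketched as follows: a single coefficient phi grows all three dimensions together via depth = alpha^phi, width = beta^phi, resolution = gamma^phi, with alpha * beta^2 * gamma^2 close to 2, so FLOPs roughly double per step of phi. The coefficients below are those reported by Tan and Le (2019a); the baseline layer/channel/resolution numbers are made up for illustration:

```python
# Compound scaling coefficients from Tan & Le (2019); the constraint
# alpha * beta**2 * gamma**2 ~ 2 keeps FLOPs roughly doubling per unit of phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(base_depth, base_width, base_resolution, phi):
    """Scale depth (layers), width (channels) and input resolution together."""
    depth = round(base_depth * ALPHA ** phi)
    width = round(base_width * BETA ** phi)
    resolution = round(base_resolution * GAMMA ** phi)
    return depth, width, resolution

# hypothetical baseline network: 18 layers, 32 channels, 224x224 input
for phi in range(4):
    print(phi, compound_scale(18, 32, 224, phi))
```

The point of the grid search in the original work is precisely to find alpha, beta and gamma once on a small baseline, after which any larger variant is obtained by picking phi alone.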
This network was built by scaling up ConvNets in all three dimensions and will also be described here.

2.2.4 Deep residual networks

Deep residual networks, called ResNets (He et al., 2015), were presented as the answer to the question of whether stacking more layers would enable a network to learn better. Until then, one obstacle to simply stacking layers was the problem of vanishing/exploding gradients. It had been largely addressed by normalized initialization and intermediate normalization layers, which enabled networks with tens of layers to start converging under stochastic gradient descent (SGD) with backpropagation. Another obstacle was the degradation problem.
Degradation occurs when, as network depth increases, accuracy saturates and then rapidly degrades. This degradation is not caused by overfitting: adding more layers to a suitably deep model leads to higher training error, which indicates that not all systems are similarly easy to optimize. To see this, consider a shallower architecture and its deeper counterpart that adds more layers. One way to construct the deeper model without degradation would be to copy the layers from the shallower model and let the added layers be identity mappings. Such a deeper model should produce a training error no higher than its shallower counterpart. In practice, however, solvers fail to find solutions that are comparably good or better. The solution to this degradation problem proposed by He et al. (2015) is a deep residual learning framework.
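The identity-construction argument above can be checked numerically: appending layers that compute the identity leaves the function of the shallower network unchanged, so the deeper model can always do at least as well. A toy sketch (sizes and random weights are arbitrary, for illustration only):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=4)

# A shallow model: two linear + ReLU layers with random weights.
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
shallow_out = relu(W2 @ relu(W1 @ x))

# Its deeper counterpart: the same layers plus an added identity layer.
identity_layer = np.eye(4)
deep_out = identity_layer @ shallow_out

# The deeper model represents exactly the same function, so its training
# error could never exceed the shallower model's.
assert np.allclose(shallow_out, deep_out)
```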
2.2.4.1 Deep Residual Learning

2.2.4.1.1 Residual Learning

The idea of residual learning is to replace the approximation of an underlying mapping H(x) by a few stacked layers (not necessarily the entire net) with the approximation of the residual function F(x) := H(x) − x. Here x denotes the input to the first of these layers, and it is assumed that inputs and outputs have the same dimensions. The original function thus takes the form F(x) + x. A counter-intuitive phenomenon about degradation motivated this reformulation.
A deeper model constructed with added identity-mapping layers should have no greater training error than its shallower counterpart. However, the degradation problem suggests that solvers may have difficulties in approximating identity mappings by multiple non-linear layers. With the residual learning reformulation, the solver can simply drive the weights of the non-linear layers toward zero to approach identity mappings if they are optimal. In general, identity mappings are not optimal, but the reformulation may help to pre-condition the problem: when an optimal function is closer to an identity mapping than to a zero mapping, it should be easier to find the perturbations with reference to an identity mapping than to learn the function from scratch.
2.2.4.1.2 Identity Mapping by Shortcuts

Residual learning is adopted for every few stacked layers, where a building block is defined as:

y = F(x, {W_i}) + x    (2.1)

Here x and y denote the input and output vectors of the layers, and Figure 2.24 visualizes the building block.

FIGURE 2.24: Building block of residual learning (He et al., 2015).

The function F(x, {W_i}) represents the residual mapping that is to be learned. For the example with two layers from Figure 2.24,
F = W_2 σ(W_1 x), in which σ denotes the ReLU activation function; biases are omitted to simplify the notation. The operation F + x is performed by a shortcut connection and element-wise addition, after which a second non-linearity σ(y) is applied. The shortcut connection in Equation (2.1) adds neither extra parameters nor computational complexity, and it enables comparisons between plain and residual networks that concurrently have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition). The dimensions of x and F in Equation (2.1) must be equal.
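As a concrete (hypothetical) illustration, the two-layer building block of Equation (2.1) can be sketched with NumPy for the fully-connected case; shapes and weights are arbitrary choices for the example:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    """y = sigma(F(x) + x) with F = W2 @ relu(W1 @ x), as in Eq. (2.1)."""
    F = W2 @ relu(W1 @ x)   # residual mapping learned by the stacked layers
    return relu(F + x)      # shortcut connection + element-wise addition

rng = np.random.default_rng(0)
x = rng.normal(size=8)
W1 = rng.normal(size=(8, 8)) * 0.1
W2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, W1, W2)

# Driving all weights to zero makes F vanish, so the block reduces to the
# (rectified) identity, matching the pre-conditioning argument above.
y_id = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
assert np.allclose(y_id, relu(x))
```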
Alternatively, to match the dimensions, a linear projection W_s can be applied by the shortcut connections:

y = F(x, {W_i}) + W_s x.    (2.2)

A square matrix W_s could also be used in Equation (2.2). However, experiments showed that the identity mapping is sufficient to address the degradation problem, so W_s is used only to match dimensions. Although more layers are possible, experiments were conducted with the function F having two or three layers, without fixing its exact form. If F had only a single layer, Equation (2.1) would be comparable to a linear layer: y = W_1 x + x. The notation above refers to fully-connected layers for simplicity, but convolutional layers were used.
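Equation (2.2) can be sketched the same way; here the residual branch changes the dimension from 4 to 6, so a 6 × 4 projection W_s (hypothetical shapes, for illustration only) is needed on the shortcut:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def projection_block(x, W1, W2, Ws):
    """y = F(x) + Ws x, as in Eq. (2.2), for mismatched input/output dims."""
    F = W2 @ relu(W1 @ x)    # residual branch maps R^4 -> R^6 here
    return relu(F + Ws @ x)  # Ws (6x4) projects the shortcut into R^6

rng = np.random.default_rng(1)
x = rng.normal(size=4)
W1 = rng.normal(size=(6, 4))
W2 = rng.normal(size=(6, 6))
Ws = rng.normal(size=(6, 4))

y = projection_block(x, W1, W2, Ws)
assert y.shape == (6,)  # shortcut and residual branch now agree in dimension
```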
The function F(x, {W_i}) can also represent multiple convolutional layers, in which case the two feature maps are added element-wise, channel by channel.

2.2.4.2 Network Architectures

Various plain/residual networks were tested to construct an efficient residual network. The networks were trained on benchmark datasets, e.g. the ImageNet dataset, which are used to compare network architectures.
Figure 2.25 shows that every residual network needs a plain baseline network, inspired by the VGG network (Simonyan and Zisserman, 2014), on which identity mapping by shortcuts is applied.

Plain Network: The philosophy of VGG nets mainly inspires the plain baselines. The convolutional layers mostly have 3 × 3 filters and follow two rules: (i) layers producing feature maps of the same output size have the same number of filters; (ii) when the size of a feature map is halved, the number of filters per layer is doubled to maintain the time complexity per layer. Convolutional layers with a stride of 2 perform downsampling directly. A global average pooling layer and a 1000-way fully-connected layer with softmax are at the end of the network. The number of weighted layers sums up to 34 (Figure 2.25, middle).
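The second rule keeps the time complexity per layer constant: the multiply-accumulate cost of a 3 × 3 convolution grows with H · W · C_in · C_out, so halving the feature map while doubling the filters cancels out. A quick check of this arithmetic (the 56/64 sizes match an early ResNet stage and are used here only as an example):

```python
def conv3x3_macs(h, w, c_in, c_out):
    # Multiply-accumulates of a 3x3 convolution over an h x w feature map.
    return h * w * c_in * c_out * 3 * 3

before = conv3x3_macs(56, 56, 64, 64)
after = conv3x3_macs(28, 28, 128, 128)  # half the size, double the filters

# (1/2)^2 from the spatial halving cancels the 2*2 from doubling channels.
assert before == after
```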
Compared to VGG nets, this model has fewer filters and lower complexity (Figure 2.25, left).

Residual Network: Based on the above plain network, additional shortcut connections (Figure 2.25, right) turn the network into its residual variant. The identity shortcuts (Equation (2.1)) can be used directly when input and output have the same dimensions (solid-line shortcuts in Figure 2.25). For differing dimensions (dotted-line shortcuts in Figure 2.25),
two options are considered: (i) the shortcut still performs identity mapping, with extra zero entries padded to cope with the increased dimensions, which adds no new parameters; (ii) the projection shortcut in Equation (2.2) is used to match the dimensions (by 1 × 1 convolutions). In both cases, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.

2.2.5 EfficientNet

Until Tan and Le (2019b) introduced EfficientNet, it was popular to scale only one of the three dimensions: depth, width, or image size. Their empirical study shows that it is critical to balance all dimensions of a network, which can be achieved by simply scaling each of them with a constant ratio.
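A sketch of such constant-ratio compound scaling; the coefficients α = 1.2, β = 1.1, γ = 1.15 are the values reported for the EfficientNet baseline, but they serve only as an illustration here:

```python
# Compound scaling: depth *= alpha**phi, width *= beta**phi,
# resolution *= gamma**phi, for a user-chosen compound coefficient phi.
alpha, beta, gamma = 1.2, 1.1, 1.15

def compound_scale(depth, width, resolution, phi):
    return (depth * alpha ** phi,
            width * beta ** phi,
            resolution * gamma ** phi)

# FLOPS grow roughly with depth * width^2 * resolution^2, so one step of
# phi multiplies the cost by alpha * beta**2 * gamma**2, chosen to be ~2
# (i.e. doubling phi's budget roughly doubles the compute).
factor = alpha * beta ** 2 * gamma ** 2
assert abs(factor - 2.0) < 0.1

d, w, r = compound_scale(1.0, 1.0, 224, phi=1)
```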
Based on this observation, a simple yet effective compound scaling method was proposed, which uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. For example, if 2^N times more computational resources are available, the network depth can be increased by α^N, the width by β^N, and the image size by γ^N, where α, β, γ are constant coefficients determined by a small grid search on the original small model. Figure 2.26 illustrates the difference between this scaling method and conventional methods. Compound scaling makes sense intuitively: if the input image is bigger, the network needs more layers to enlarge the receptive field and more channels to capture more fine-grained patterns. Theoretically and empirically, there has been a special relationship between
network width and depth (Raghu et al., 2016).

FIGURE 2.25: Architecture of ResNet (He et al., 2015). Left: VGG-19; middle: a 34-layer plain network; right: a 34-layer residual network.

Existing MobileNets (Howard et al.
, 2017) and ResNets are used to demonstrate the new scaling method.

FIGURE 2.26: Model scaling (Tan and Le, 2019b).

2.2.5.1 Compound Model Scaling

2.2.5.1.1 Problem Formulation

A ConvNet layer i can be defined as a function Y_i = F_i(X_i), with operator F_i, output tensor Y_i, and input tensor X_i of shape (H_i, W_i, C_i), where H_i and W_i are the spatial dimensions and C_i is the channel dimension.
A ConvNet N appears as a list of composed layers:

$$\mathcal{N} = \mathcal{F}_k \odot \cdots \odot \mathcal{F}_2 \odot \mathcal{F}_1(X_1) = \bigodot_{j=1\ldots k} \mathcal{F}_j(X_1)$$

In practice, these layers are often partitioned into multiple stages, and all layers within each stage share the same architecture. For example, ResNet has five stages, with all layers in every stage being of the same convolutional type except for the first layer, which performs down-sampling. Therefore, a ConvNet can be defined as:

$$\mathcal{N} = \bigodot_{i=1\ldots s} \mathcal{F}_i^{L_i}\left(X_{(H_i, W_i, C_i)}\right)$$

where F_i^{L_i} denotes layer F_i repeated L_i times in stage i, and (H_i, W_i, C_i) is the shape of the input tensor X of layer i. Whereas regular ConvNet design focuses on finding the best layer architecture F_i, model scaling centers on expanding the network length (L_i), width (C_i), and/or resolution (H_i, W_i) without changing the F_i predefined in the baseline network. Although fixing F_i simplifies the design problem for new resource constraints, a large design space (L_i, H_i, W_i, C_i) still remains to be explored for each layer.
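The staged composition above can be sketched at the level of tensor shapes. The two-stage configuration below is hypothetical and purely illustrative, not a real architecture:

```python
from typing import Callable, List, Tuple

# Shape-level sketch of the staged ConvNet above: each stage repeats one
# layer operator F_i exactly L_i times, and the network composes the stages.
Shape = Tuple[int, int, int]  # (H_i, W_i, C_i)

def conv_shape(stride: int, out_channels: int) -> Callable[[Shape], Shape]:
    """Shape transform of one conv layer with the given stride and width."""
    def f(x: Shape) -> Shape:
        h, w, _ = x
        return (h // stride, w // stride, out_channels)
    return f

def convnet(stages: List[Tuple[Callable[[Shape], Shape], int]], x: Shape) -> Shape:
    # N = (+)_i F_i^{L_i}(X): apply each stage's operator L_i times.
    for f, repeats in stages:
        for _ in range(repeats):
            x = f(x)
    return x

stages = [(conv_shape(2, 64), 2), (conv_shape(2, 128), 2)]
print(convnet(stages, (224, 224, 3)))  # (14, 14, 128)
```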
To further reduce the design space, all layers are restricted to be scaled uniformly with a constant ratio (Figure 2.26 illustrates this with the baseline network and its width-, depth-, resolution-, and compound-scaled variants). In this case, the goal is to maximize the model's accuracy for any given resource constraint, which is stated as an optimization problem:

$$\max_{d,w,r} \; \text{Accuracy}\left(\mathcal{N}(d, w, r)\right)$$
$$\text{s.t.} \quad \mathcal{N}(d, w, r) = \bigodot_{i=1\ldots s} \hat{\mathcal{F}}_i^{\,d \cdot \hat{L}_i}\left(X_{\langle r \cdot \hat{H}_i,\, r \cdot \hat{W}_i,\, w \cdot \hat{C}_i \rangle}\right)$$
$$\text{Memory}(\mathcal{N}) \le \text{target\_memory}, \qquad \text{FLOPS}(\mathcal{N}) \le \text{target\_flops}$$

where w, d, r are coefficients for scaling network width, depth, and resolution, and $(\hat{\mathcal{F}}_i, \hat{L}_i, \hat{H}_i, \hat{W}_i, \hat{C}_i)$ are the predefined parameters of the baseline network.

2.2.5.1.2 Scaling Dimensions

The main difficulty of this optimization problem is that the optimal d, w, r depend on each other, and their values change under different resource constraints. Due to this difficulty, conventional methods mostly scale ConvNets along only one of these dimensions:

Depth (d): One of the most significant networks described previously is the ResNet. As discussed, its problem is that the accuracy gain of a very deep network diminishes.
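The scaled network N(d, w, r) defined above can be sketched as a config transform. The baseline configuration below is hypothetical, and the rounding to integer layer counts, channels, and resolutions is an illustrative assumption:

```python
# Sketch of the scaled network N(d, w, r) above: every stage of a
# hypothetical baseline gets d*L_i layers, w*C_i channels, and an
# r-scaled input resolution. Shapes only; no real layers are built.
def scale_config(baseline, d, w, r):
    return [(round(d * layers), round(r * h), round(r * w_px), round(w * ch))
            for layers, h, w_px, ch in baseline]

baseline = [(2, 224, 224, 32), (3, 112, 112, 64)]  # (L_i, H_i, W_i, C_i)
print(scale_config(baseline, d=2.0, w=1.5, r=1.0))
# [(4, 224, 224, 48), (6, 112, 112, 96)]
```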
For example, ResNet-1000 has accuracy similar to ResNet-101 even though it contains many more layers.

Width (w): Scaling network width is commonly used for small-sized models. However, wide but shallow networks tend to have difficulty grasping higher-level features.

Resolution (r): Starting from 224×224 in early ConvNets, modern ConvNets tend to use 299×299 or 331×331 for better accuracy. GPipe (Huang et al., 2018) recently achieved state-of-the-art ImageNet accuracy with 480×480 resolution. Higher resolutions, such as 600×600, are also widely used in ConvNets for object detection.

The above analyses lead to the first observation:

Observation 1: Scaling up any dimension of network width, depth, or resolution improves accuracy.
However, the accuracy gain diminishes for bigger models.

2.2.5.1.3 Compound Scaling

Firstly, it was observed that the different scaling dimensions are not independent, because higher-resolution images also require the network depth to be increased: the larger receptive fields can help capture similar features that span more pixels in bigger images. Similarly, network width should be increased when the resolution is higher in order to capture more fine-grained patterns. This intuition suggests that the different scaling dimensions should be coordinated and balanced rather than scaled along a single dimension, as in conventional approaches.
To confirm this intuition, networks scaled only in width w, with depth (d=1.0) and resolution (r=1.0) unchanged, were compared with networks that were also deeper (d=2.0) and of higher resolution (r=2.0). The comparison showed that width scaling achieves much better accuracy under the same FLOPS when it is combined with depth and resolution scaling. These results lead to the second observation:

Observation 2: To achieve better accuracy and efficiency, it is critical to balance the dimensions of network width, depth, and resolution during ConvNet scaling.

Earlier research had tried to balance network width and depth arbitrarily, but such approaches require tedious manual tuning.
A new compound scaling method was proposed, which uses a compound coefficient φ to uniformly scale network width, depth, and resolution in a principled way:

$$\text{depth: } d = \alpha^\phi \qquad \text{width: } w = \beta^\phi \qquad \text{resolution: } r = \gamma^\phi$$
$$\text{s.t.} \quad \alpha \cdot \beta^2 \cdot \gamma^2 \approx 2, \qquad \alpha \ge 1,\; \beta \ge 1,\; \gamma \ge 1 \tag{2.3}$$

where α, β, γ are constants that can be determined by a small grid search. The coefficient φ is user-specified and controls how many more resources are available for model scaling, while α, β, γ specify how to assign these extra resources to network depth, width, and resolution, respectively. Notably, the FLOPS of a regular convolution operation is proportional to d, w², and r²; i.e., doubling the network depth doubles the FLOPS, but doubling the network width or resolution increases the FLOPS four-fold. Scaling a ConvNet following Equation (2.3) will therefore increase the total FLOPS by approximately (α · β² · γ²)^φ. In this chapter, α · β² · γ² ≈ 2 is constrained such that for any new φ the total FLOPS increase by approximately 2^φ.

2.2.5.2 EfficientNet Architecture

A good baseline network is essential because model scaling does not change its layer operators F_i. The method is therefore also evaluated on existing ConvNets, and a new mobile-sized baseline called EfficientNet was developed to demonstrate the effectiveness of the new scaling method. The metrics used to estimate its efficacy are accuracy and FLOPS.
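The FLOPS relation above can be checked numerically. This small sketch uses the constants α = 1.2, β = 1.1, γ = 1.15 that the grid search in this section reports for EfficientNet-B0:

```python
# Numeric check of the FLOPS relation above: regular-conv FLOPS scale with
# depth d, width w^2, and resolution r^2, so compound scaling multiplies
# total FLOPS by roughly (alpha * beta^2 * gamma^2) ** phi.
def flops_multiplier(alpha: float, beta: float, gamma: float, phi: float) -> float:
    return (alpha * beta**2 * gamma**2) ** phi

# With the searched constants, alpha * beta^2 * gamma^2 ~ 1.92, close to
# the constraint value 2, so each unit of phi roughly doubles the FLOPS.
print(round(flops_multiplier(1.2, 1.1, 1.15, 1), 3))  # 1.92
print(round(flops_multiplier(1.2, 1.1, 1.15, 3), 2))  # 7.08 (vs 2**3 = 8)
```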
The baseline network that was created is named EfficientNet-B0. Afterwards, the compound scaling method is applied in two steps:

STEP 1: Fixing φ = 1 and assuming twice as many resources are available, a small grid search of α, β, γ based on Equation (2.3) showed that the best values for EfficientNet-B0 are α = 1.2, β = 1.1, γ = 1.15 under the constraint α · β² · γ² ≈ 2.

STEP 2: Afterwards, α, β, γ are fixed as constants and the baseline network is scaled up with different φ using Equation (2.3) to construct EfficientNet-B1 to B7.
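The two-step recipe above can be sketched as follows. The accuracy function here is a toy proxy standing in for actually training and evaluating each candidate, and the grid values are hypothetical; both are illustrative assumptions:

```python
import itertools

# Sketch of the two-step recipe above. STEP 1 grid-searches alpha, beta,
# gamma at phi = 1 under the constraint alpha * beta^2 * gamma^2 ~ 2;
# STEP 2 fixes them and raises phi for the bigger model variants.
def step1_grid_search(accuracy, grid):
    best, best_acc = None, float("-inf")
    for a, b, g in itertools.product(grid, repeat=3):
        if abs(a * b**2 * g**2 - 2.0) > 0.1:   # FLOPS-budget constraint
            continue
        acc = accuracy(a, b, g)                # would be train + evaluate
        if acc > best_acc:
            best, best_acc = (a, b, g), acc
    return best

def step2_scale(alpha, beta, gamma, phi):
    # Depth, width, and resolution coefficients for a resource budget phi.
    return alpha**phi, beta**phi, gamma**phi

# Toy proxy that peaks at the values reported for EfficientNet-B0.
def proxy(a, b, g):
    return -abs(a - 1.2) - abs(b - 1.1) - abs(g - 1.15)

grid = [1.0, 1.05, 1.1, 1.15, 1.2, 1.25]
print(step1_grid_search(proxy, grid))  # (1.2, 1.1, 1.15)
```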
The resulting model sizes are:

Name               Number of parameters
EfficientNet-B0    5.3M
EfficientNet-B1    7.8M
EfficientNet-B2    9.2M
EfficientNet-B3    12M
EfficientNet-B4    19M
EfficientNet-B5    30M
EfficientNet-B6    43M
EfficientNet-B7    66M

Indeed, even better performance is achievable by searching for α, β, γ directly around a large model, but the search cost becomes prohibitively expensive on larger models. This method instead searches once on a small baseline network and then scales the coefficients for all other models.

2.2.5.3 Results and comparison of the networks

To demonstrate the performance of both networks, ResNet and the EfficientNets were trained and evaluated on the ImageNet 2012 classification dataset consisting of 1,000 classes.
Since deeper scaling should provide better results in the case of ResNet, it was trained with increasing depth. The first meaningful results were obtained with ResNet-34, whose top-1 accuracy was 3.5% better than the plain-34 baseline. Three versions of ResNet were also compared: (A) zero-padding shortcuts (for increasing dimensions; all shortcuts are parameter-free), (B) projection shortcuts (for increasing dimensions; other shortcuts are identity), and (C) all shortcuts are projections. Each version improved both the top-1 and the top-5 accuracy. Afterward, the depth of the network was increased further, creating ResNet-50, ResNet-101, and ResNet-152. Each increase in depth leads to higher accuracy, although in the deepest models the accuracy gain becomes too small to justify the additional depth.
All results are shown in the following table:

Model          top-1 acc.   top-5 acc.
VGG-16         71.93        90.67
GoogLeNet      -            90.85
plain-34       71.46        89.98
ResNet-34 A    74.97        92.24
ResNet-34 B    75.48        92.54
ResNet-34 C    75.81        92.6
ResNet-50      77.15        93.29
ResNet-101     78.25        93.95
ResNet-152     78.57        94.29

In the case of the EfficientNets, the aim was to improve on the results achieved by the previous state-of-the-art networks on the same ImageNet dataset. Among all state-of-the-art networks, the EfficientNets were compared with ResNet-50 and ResNet-152. The compared networks were derived by changing the scaling parameters, giving EfficientNet-B0 to EfficientNet-B7.
The results of each network were better than the previous one. It was also shown that EfficientNet-B0 outperforms ResNet-50 and that EfficientNet-B1 outperforms ResNet-152. This means that scaling along all three dimensions can provide better results than scaling along just one. The drawback of this approach is its computational cost, which makes it less popular than the previous methods. Again, all results are shown in the following table:

Model                            top-1 acc.    top-5 acc.
EfficientNet-B0 / ResNet-50      77.1 / 76     93.3 / 93
EfficientNet-B1 / ResNet-152     79.1 / 77.8   94.4 / 93.8
EfficientNet-B2                  80.1          94.9
EfficientNet-B3 / ResNeXt-101    81.6 / 80.9   95.7 / 95.6
EfficientNet-B4                  82.9          96.4
EfficientNet-B5                  83.6          96.7
EfficientNet-B6                  84            96.8
EfficientNet-B7 / GPipe          84.3 / 84.3   97 / 97

2.2.6 Contrastive learning

In recent years, the problem of classifying unlabeled datasets has become more widespread. More and more unlabeled datasets requiring human labeling are created in fields like medicine, the automotive industry, and the military. Since the labeling process is expensive and time-consuming, researchers assumed it could be automated with contrastive learning frameworks. One of the first and best-known contrastive learning frameworks is SimCLR (Chen et al., 2020a). The advantage of this framework is its simplicity, yet it achieves high accuracy on classification tasks.
The main idea is to create two augmented copies of each image, which are used to train the network and are then compared. The problem with this framework is that it doubles the size of the dataset and compares across all images, which can be computationally infeasible for large datasets. Bootstrap Your Own Latent (Grill et al., 2020b) was introduced to avoid creating double-sized datasets; its idea is to bootstrap image representations in order to avoid unnecessary image comparisons. These two frameworks are described in this chapter. Further improvements in how the two views of an image are created and compared were presented in frameworks such as Nearest-Neighbor Contrastive Learning (NNCLR) (Dwibedi et al., 2021), Open World Object Detection (ORE) (Joseph et al., 2021), Swapping Assignments between multiple Views (SwAV) (Caron et al., 2020), and many more. This field is an active research topic, and new, improved frameworks are constantly being proposed to help researchers solve tasks that require labeled datasets.

2.2.6.1 A Simple Framework for Contrastive Learning of Visual Representations

Chen et al. (2020a) set out to analyze and describe a better approach to learning visual representations without human supervision. They introduced a simple framework for contrastive learning of visual representations called SimCLR.
As the authors claim, SimCLR outperforms previous work, is more straightforward, and does not require a memory bank. Intending to understand what enables good contrastive representation learning, the significant components of the framework were studied, with the following findings:

- A contrastive prediction task requires combining multiple data augmentation operations, which results in effective representations.
- Unsupervised contrastive learning benefits from stronger data augmentation.
- The quality of the learned representations can be substantially improved by introducing a learnable non-linear transformation between the representation and the contrastive loss.
- Representation learning with a contrastive cross-entropy loss can be improved by normalizing embeddings and adjusting the temperature parameter appropriately.
- Unlike its supervised counterpart, contrastive learning benefits from larger batch sizes and longer training.
- Contrastive learning also benefits from deeper and wider networks, just as supervised learning does.
2.2.6.2 The Contrastive Learning Framework

Like for SimCLR, a contrastive loss is used to learn a representation by maximizing the agreement between various augmented views of the same data example. This framework contains four significant components, which are shown in Figure 2.27:

1. A stochastic data augmentation module
2. A neural network base encoder
3. A small neural network projection head
4. A contrastive loss function
FIGURE 2.27: A simple framework for contrastive learning of visual representations (Chen et al., 2020a).

2.2.6.2.1 Stochastic data augmentation module

First, a minibatch of N examples is sampled randomly, and the contrastive prediction task is defined on pairs of augmented examples, resulting in 2N data points. A memory bank was not used to train the model; instead, the training batch size varies from 256 to 8192. Any given data example randomly returns two correlated views of the same example, denoted x̃_i and x̃_j, which is known as a positive pair. For a given positive pair, the other 2(N − 1) augmented examples are treated as negative examples.
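The batch construction just described can be sketched in a few lines. The following is a minimal, illustrative numpy version in which `augment` is a placeholder (here just additive noise, not one of SimCLR's actual transformations):

```python
import numpy as np

def make_contrastive_batch(batch, augment, rng):
    """Given N examples, return 2N augmented views.

    Rows 2k and 2k+1 come from the same original example and form a
    positive pair; all other pairings are treated as negatives.
    """
    views = []
    for x in batch:
        views.append(augment(x, rng))  # view x~_i
        views.append(augment(x, rng))  # view x~_j
    return np.stack(views)

# Toy usage: the "augmentation" is additive noise, purely for illustration.
rng = np.random.default_rng(0)
batch = [np.ones(4), np.zeros(4)]  # N = 2 examples
noise = lambda x, r: x + 0.1 * r.standard_normal(x.shape)
views = make_contrastive_batch(batch, noise, rng)
assert views.shape == (4, 4)       # 2N = 4 data points
```

In a real pipeline `augment` would be the stochastic crop/color-distortion module described below, and the pair indexing (2k, 2k+1) is what the contrastive loss later relies on.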
In one of the views, some data augmentation techniques are applied. Data augmentation is widely embraced in supervised and unsupervised representation learning. Previously, however, it had not been used to define the contrastive prediction task, which was mainly determined by changing the architecture. It was shown that choosing different data augmentation techniques can reduce the complexity of previous contrastive learning frameworks. There are many data augmentation operations; the focus was on the most common ones:

- Spatial/geometric transformations: cropping and resizing (with horizontal flipping), rotation, and cutout.
- Appearance transformations: color distortion (including color dropping, brightness, contrast, and saturation), Gaussian blur, and Sobel filtering.

FIGURE 2.28: Augmentation techniques (Chen et al., 2020a).
Due to the image sizes in the ImageNet dataset, all images were always randomly cropped and resized to the same resolution. Later on, other targeted data augmentation transformations were applied to one branch only, keeping the other one as the original, i.e. t(x_i) = x_i.
Applying just an individual transformation is insufficient for the model to learn good representations. The model's performance improves after composing augmentations, although the contrastive prediction task becomes more complex. The composition of augmentations that stood out was random cropping combined with random color distortion. It was also observed that stronger color augmentation significantly improves the linear evaluation of unsupervised learned models, whereas stronger color augmentations do not enhance the performance of supervised models trained with the same augmentations. Based on the experiments, unsupervised contrastive learning thus benefits from stronger color data augmentation than supervised learning.
2.2.6.2.2 Neural network base encoder

A neural network base encoder f(·) extracts representation vectors from the augmented data examples. The framework does not restrict the choice of the network architecture, although for simplicity the commonly used ResNet was picked, giving h_i = f(x̃_i) = ResNet(x̃_i), where h_i ∈ R^d is the output after the average pooling layer. Although increasing depth and width improves performance, ResNet-50 was chosen as the default. Furthermore, as the model size increases, the gap between supervised and unsupervised learning shrinks, suggesting that bigger models benefit more from unsupervised learning.
2.2.6.2.3 Small neural network projection head

A small neural network projection head g(·) maps the representation to the space where the contrastive loss is applied. The importance of including a projection head, i.e. g(h), was evaluated by considering three different architectures for the head:

1. identity mapping,
2. linear projection,
3. the default non-linear projection with one additional hidden layer and a ReLU activation function.

The results showed that a non-linear projection head is better than a linear projection and much better than no projection at all. It improves the representation quality of the layer that comes before it.
An MLP with one hidden layer was used to obtain z_i = g(h_i) = W^(2) σ(W^(1) h_i), where σ is the ReLU non-linearity. This step is performed because defining the contrastive loss on z_i instead of on h_i shields h_i from the loss of information caused by the contrastive objective. In particular, z = g(h) is trained to be invariant to data transformations; as a result, g can remove information that may be useful for a downstream task, such as object color or orientation. Thanks to the non-linear transformation g(·), h can maintain and retain more of this information.
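A minimal numpy sketch of this projection head follows; the dimensions and the weight initialization are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def projection_head(h, W1, W2):
    """z = W2 @ relu(W1 @ h): the non-linear projection g(.) applied to
    the encoder output h before the contrastive loss."""
    return W2 @ np.maximum(W1 @ h, 0.0)

rng = np.random.default_rng(0)
d, p = 2048, 128                        # representation / projection dims (illustrative)
W1 = rng.standard_normal((d, d)) * 0.01  # hidden layer weights
W2 = rng.standard_normal((p, d)) * 0.01  # output layer weights
h = rng.standard_normal(d)               # h_i = f(x~_i), e.g. ResNet pooled features
z = projection_head(h, W1, W2)
assert z.shape == (p,)
```

The loss is then computed on z rather than h, and at evaluation time the head is discarded so that downstream tasks use h directly.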
2.2.6.2.4 Contrastive loss function

Given a set {x̃_k} including a positive pair of examples x̃_i and x̃_j, the contrastive prediction task aims to identify x̃_j in {x̃_k}_{k≠i} for a given x̃_i. For a positive pair (i, j), the loss function is defined as

$$\ell_{i,j} = -\log \frac{\exp\left(\mathrm{sim}(z_i, z_j)/\tau\right)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp\left(\mathrm{sim}(z_i, z_k)/\tau\right)},$$

where 𝟙_[k≠i] ∈ {0, 1} is an indicator function, τ denotes a temperature parameter, and sim(u, v) = uᵀv / (‖u‖‖v‖) is the dot product between ℓ2-normalized u and v, i.e. cosine similarity. The final loss is computed across all positive pairs, both (i, j) and (j, i), in a mini-batch. It was named NT-Xent, the normalized temperature-scaled cross-entropy loss. The NT-Xent loss was compared against other commonly used contrastive loss functions, such as the logistic loss and the margin loss. Gradient analysis shows that ℓ2 normalization, cosine similarity, and temperature together effectively weight different examples, and that a suitable temperature can make the model learn from hard negatives.
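The NT-Xent loss above can be written compactly in numpy, assuming the 2N embeddings are arranged so that rows 2k and 2k+1 form each positive pair (a layout convention, not mandated by the paper):

```python
import numpy as np

def nt_xent(z, tau=0.5):
    """NT-Xent loss for 2N embeddings z, where rows 2k and 2k+1
    are a positive pair; averaged over all (i, j) and (j, i)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # l2 normalisation
    sim = z @ z.T / tau                                # cosine similarity / temperature
    np.fill_diagonal(sim, -np.inf)                     # indicator 1_[k != i]: drop k == i
    n2 = z.shape[0]
    pos = np.arange(n2) ^ 1                            # index of each row's positive partner
    # l_{i,j} = -log softmax over the 2N - 1 candidates, evaluated at the positive
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    losses = -(sim[np.arange(n2), pos] - logsumexp)
    return losses.mean()

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))   # 2N = 8 toy embeddings of dimension 16
loss = nt_xent(z)
assert np.isfinite(loss) and loss > 0
```

Setting the diagonal to -inf makes exp(sim) vanish there, which is exactly the role of the indicator 𝟙_[k≠i] in the denominator.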
The advantage of NT-Xent is that it weights the negatives by their relative hardness. Without ℓ2 normalization and proper temperature scaling, the performance is significantly worse: the contrastive task accuracy is higher, but the resulting representation is worse under linear evaluation.

2.2.6.3 Bootstrap Your Own Latent

The fundamental idea of contrastive learning is to create pairs of images on which the framework is trained. Creating negative pairs relies on large batch sizes, memory banks, or customized mining strategies, which can be challenging for larger datasets. Grill et al.
(2020b) wanted to create a new approach that would achieve better performance than other contrastive methods without using negative pairs. The solution they introduced is a method called Bootstrap Your Own Latent (BYOL), whose idea is to bootstrap representations of images. As a result, BYOL is more robust to the choice of image augmentations. BYOL uses two neural networks, called the online and the target network, which interact and learn from each other. Using an augmented view of an image, BYOL trains its online network to predict the target network's representation of another augmented view. This approach achieved state-of-the-art results when trained on the ImageNet dataset under the linear evaluation protocol.
Additionally, compared to SimCLR, a strong contrastive baseline, BYOL suffers from a much smaller performance drop when only random crops are used to augment images.

2.2.6.3.1 Description of the method

BYOL's goal is to learn a representation y_θ. To achieve this, it uses two neural networks: the online and the target network. The online network is determined by a set of weights θ and consists of:

- an encoder f_θ,
- a projector g_θ,
- a predictor q_θ.

FIGURE 2.29: Bootstrap Your Own Latent (Grill et al., 2020b).
The target network has the same architecture as the online network but uses a different set of weights ξ. It provides the regression targets for training the online network, and its parameters ξ are an exponential moving average of the online parameters θ. Precisely, given a target decay rate τ ∈ [0, 1], after each training step the update

ξ ← τξ + (1 − τ)θ

is performed. First, an image x is sampled uniformly from the dataset D, and two distributions of image augmentations T and T′ are used. BYOL applies two image augmentations t ∼ T and t′ ∼ T′, creating two augmented views v ≜ t(x) and v′ ≜ t′(x). The first augmented view v is passed through the online network, resulting in the representation y_θ ≜ f_θ(v) and the projection z_θ ≜ g_θ(y_θ). Similarly, from the second augmented view v′, the target network outputs the representation y′_ξ ≜ f_ξ(v′) and the target projection z′_ξ ≜ g_ξ(y′_ξ).
The online network then outputs a prediction q_θ(z_θ) of z′_ξ, and both q_θ(z_θ) and z′_ξ are ℓ2-normalized, giving q̄_θ(z_θ) ≜ q_θ(z_θ)/‖q_θ(z_θ)‖_2 and z̄′_ξ ≜ z′_ξ/‖z′_ξ‖_2. The predictor is applied only to the online pipeline, making the architecture asymmetric between the online and the target pipeline. Lastly, the mean squared error between the normalized prediction and the target projection is defined:

$$\mathcal{L}_{\theta,\xi} \triangleq \left\| \bar{q}_\theta(z_\theta) - \bar{z}'_\xi \right\|_2^2 = 2 - 2 \cdot \frac{\left\langle q_\theta(z_\theta),\, z'_\xi \right\rangle}{\left\| q_\theta(z_\theta) \right\|_2 \cdot \left\| z'_\xi \right\|_2}.$$

The loss is symmetrized by also feeding v′ to the online network and v to the target network, which yields L̃_{θ,ξ}. At each training step, a stochastic optimization step is applied to minimize L^BYOL_{θ,ξ} = L_{θ,ξ} + L̃_{θ,ξ} with respect to θ only, but not ξ. BYOL's dynamics are summarized as

θ ← optimizer(θ, ∇_θ L^BYOL_{θ,ξ}, η),

where η is a learning rate. At the end of the training, only the encoder f_θ is kept.
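Both the per-pair loss and the exponential-moving-average target update can be sketched in numpy; the shapes, the decay rate, and the parameter lists here are illustrative assumptions:

```python
import numpy as np

def byol_loss(q_online, z_target):
    """Squared error between l2-normalised online prediction and target
    projection; equal to 2 - 2 * cosine similarity, as in the equation above."""
    q = q_online / np.linalg.norm(q_online)
    z = z_target / np.linalg.norm(z_target)
    return float(np.sum((q - z) ** 2))

def ema_update(target, online, tau=0.99):
    """xi <- tau * xi + (1 - tau) * theta for every parameter tensor."""
    return [tau * t + (1 - tau) * o for t, o in zip(target, online)]

rng = np.random.default_rng(0)
q, z = rng.standard_normal(256), rng.standard_normal(256)   # toy q_theta(z_theta), z'_xi
loss = byol_loss(q, z)
assert 0.0 <= loss <= 4.0        # bounded: both vectors are unit norm after normalisation

theta = [np.ones((4, 4))]        # toy online parameters
xi = [np.zeros((4, 4))]          # toy target parameters
xi = ema_update(xi, theta, tau=0.99)
assert np.allclose(xi[0], 0.01)  # 0.99 * 0 + 0.01 * 1
```

Note that only `theta` would receive gradients in training; `xi` is updated exclusively through `ema_update`, matching the stop-gradient described above.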
2.2.6.4 Comparison of contrastive learning frameworks

Of all the frameworks, SimCLR is the most popular due to its simplicity. ResNet-50 in three different hidden layer widths (width multipliers of 1×, 2×, and 4×) was used and trained for 1000 epochs each. The accuracy of these frameworks on the ImageNet dataset with few labels improved as the width of ResNet-50 increased. For SimCLR with ResNet-50, top-1 accuracy is 69.3 and top-5 accuracy is 89.0, while for ResNet-50 (4×) top-1 accuracy is 85.8 and top-5 accuracy is 92.6.
These results are comparable with supervised methods. The BYOL framework was built to improve on the results of SimCLR. For the baseline ResNet-50, BYOL reaches 74.3 top-1 accuracy and 91.6 top-5 accuracy. When using ResNet-50 (4×), an increase to 78.6 top-1 and 94.2 top-5 accuracy is observed. More information about performance can be found in the following table:

Model    Architecture       Param (M)   top-1 acc.   top-5 acc.
SimCLR   ResNet-50          24          69.3         89.0
SimCLR   ResNet-50 (2×)     94          74.2         93.0
SimCLR   ResNet-50 (4×)     375         76.5         93.2
BYOL     ResNet-50          24          74.3         91.6
BYOL     ResNet-50 (2×)     94          77.4         93.6
BYOL     ResNet-50 (4×)     375         78.6         94.2
BYOL     ResNet-200 (2×)    250         79.6         94.8

2.2.7 Transformers in Computer Vision

Since the first appearance of the Transformer architecture in 2017 (Vaswani et al., 2017), it has become an irreplaceable part of all natural language processing (NLP) models. The main advantage of Transformers is that they can be trained on a large text corpus and then fine-tuned on a smaller task-specific dataset. This enabled the training of models of unprecedented size, with more than 100B parameters. However, computer vision still relied on convolutional architectures. With datasets constantly growing and the diversity of fields computer vision tasks could be applied to, researchers wanted to bring the Transformer architecture to the CV field.
Some works aim at combining CNN-like architectures with self-attention (Wang and Li, 2018). Others attempted to replace convolutions entirely, e.g. Ramachandran et al. (2019). The problem with the latter was that, due to their specialized attention patterns, they could not yet be scaled effectively on modern hardware accelerators. Therefore, in large-scale image recognition, classic ResNet-like architectures were still state-of-the-art. In 2021 the Google Research Brain Team published the paper "An image is worth 16 × 16 words", which introduced a new Transformer-based architecture for CV called the Vision Transformer (ViT) (Dosovitskiy et al., 2020c).
Based on the success of scaling Transformers in NLP, they aimed to apply a standard Transformer directly to images with as few changes as possible to the existing architecture. The image is split into patches, and linear embeddings of these patches are provided as inputs to the Transformer. These patches play the same role as tokens (e.g. words) in NLP. The model is trained for image classification in a supervised learning fashion.

2.2.7.1 Vision Transformers

The Brain Team wanted to create a simple but universally scalable architecture that follows the original Transformer architecture as closely as possible.
2.2.7.1.1 Method

Compared to NLP, where the Transformer input is a 1-dimensional sequence of token embeddings, images are 2-dimensional objects. Firstly, images therefore need to be represented differently in order to imitate the original architecture as closely as possible. For that reason the image x ∈ R^{H×W×C} is reshaped into a sequence of flattened 2-dimensional patches x_p ∈ R^{N×(P²·C)}, where (H, W) is the resolution of the original image, C is the number of channels, (P, P) is the resolution of each image patch, and N = HW/P² is the resulting number of patches, which is also the Transformer's effective input sequence length. The Transformer uses a constant latent vector of size D through all of its layers.
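The reshaping step described above can be sketched in a few lines of NumPy; the image size (a hypothetical 224 × 224 RGB input) and P = 16 are illustrative choices, giving N = 196 patches of dimension P²·C = 768:

```python
# Sketch of the patch-reshaping step: x ∈ R^{H×W×C} → x_p ∈ R^{N×(P^2·C)}.
import numpy as np

def image_to_patches(x, P):
    """Reshape an H×W×C image into N flattened P×P patches."""
    H, W, C = x.shape
    assert H % P == 0 and W % P == 0, "image size must be divisible by P"
    N = (H // P) * (W // P)
    x = x.reshape(H // P, P, W // P, P, C)   # split rows/cols into patch grid
    x = x.transpose(0, 2, 1, 3, 4)           # (H/P, W/P, P, P, C)
    return x.reshape(N, P * P * C)           # flatten each patch

img = np.random.rand(224, 224, 3)            # hypothetical input image
patches = image_to_patches(img, P=16)
print(patches.shape)                         # (196, 768)
```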
The first step is to flatten the patches, usually of size 16 × 16, and map them to D dimensions with a trainable linear projection to create the patch embeddings.

FIGURE 2.30: Vision Transformer (Dosovitskiy et al., 2020c).

z_0 = [x_class; x_p^1 E; x_p^2 E; ···; x_p^N E] + E_pos,   E ∈ R^{(P²·C)×D},  E_pos ∈ R^{(N+1)×D}

To this sequence of "patch embeddings", a learnable [class] token, like in BERT, is usually prepended. This token z_0^0 = x_class tells the model to classify the image and extends the sequence z by one position.
Also, the state of this token at the output of the Transformer encoder (z_L^0), to which layer normalization is applied, serves as the image representation y:

y = LN(z_L^0)

Furthermore, it is the only position to which the classification head is attached during pre-training and fine-tuning. The classification head consists of an MLP with one hidden layer during pre-training and of a single linear layer at fine-tuning time. Position embeddings, standard learnable 1-dimensional position embeddings, are added to the patch embeddings, and the resulting sequence serves as input to the encoder. The standard Transformer encoder consists of alternating layers of multi-headed self-attention (MSA) and MLP blocks. After each block, a residual connection is applied:

z'_ℓ = MSA(LN(z_{ℓ−1})) + z_{ℓ−1},   ℓ = 1 … L
z_ℓ  = MLP(LN(z'_ℓ)) + z'_ℓ,         ℓ = 1 … L

The Vision Transformer has a significantly lower image-specific inductive bias than CNNs. In ViT, only the MLP layers are local and translationally equivariant, while the self-attention layers are global. The 2-dimensional neighborhood structure is used very sparingly: the image is cut into patches at the beginning, and the position embeddings are resized as needed at fine-tuning time. Alternatively, the input sequence can consist of a CNN's feature map, on which the patch embedding projection is then applied.
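As a rough illustration of these equations, the following NumPy sketch builds z_0 from random patch vectors and runs a single encoder layer. It is a simplified stand-in for the real model: one attention head instead of multi-headed attention, ReLU instead of the GELU used in practice, random untrained weights, a toy latent size D, and no learnable layernorm parameters.

```python
# Toy single-head sketch of z_0 = [x_class; x_p E] + E_pos followed by one
# encoder layer: z' = MSA(LN(z)) + z, z_next = MLP(LN(z')) + z'.
import numpy as np

rng = np.random.default_rng(0)
N, D = 196, 64                       # number of patches, toy latent size

def LN(z, eps=1e-6):                 # layernorm over the feature dimension
    mu, var = z.mean(-1, keepdims=True), z.var(-1, keepdims=True)
    return (z - mu) / np.sqrt(var + eps)

def softmax(a):
    a = a - a.max(-1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(-1, keepdims=True)

# patch embeddings, [class] token and position embeddings
patches = rng.normal(size=(N, 16 * 16 * 3))          # flattened patches
E       = rng.normal(size=(16 * 16 * 3, D)) * 0.02   # linear projection
x_class = rng.normal(size=(1, D))                    # learnable [class] token
E_pos   = rng.normal(size=(N + 1, D)) * 0.02         # position embeddings
z = np.vstack([x_class, patches @ E]) + E_pos        # z_0, shape (N+1, D)

# one encoder layer: single-head self-attention + 2-layer MLP, each residual
Wq, Wk, Wv, Wo = (rng.normal(size=(D, D)) * 0.02 for _ in range(4))
W1 = rng.normal(size=(D, 4 * D)) * 0.02
W2 = rng.normal(size=(4 * D, D)) * 0.02

h = LN(z)
attn = softmax((h @ Wq) @ (h @ Wk).T / np.sqrt(D))   # attention weights
z_prime = (attn @ (h @ Wv)) @ Wo + z                 # MSA block + residual
z_next  = np.maximum(LN(z_prime) @ W1, 0) @ W2 + z_prime  # MLP + residual

y = LN(z_next[0:1])                                  # y = LN(z_L^0)
print(z_next.shape, y.shape)                         # (197, 64) (1, 64)
```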
Vision Transformers are pre-trained on large datasets and fine-tuned to (smaller) downstream tasks. For fine-tuning, the projection head is removed and a zero-initialized D × K feedforward layer is attached, with K being the number of downstream classes. It is also beneficial to use a higher resolution than in pre-training. ViT can handle arbitrary sequence lengths, but the pre-trained position embeddings may then no longer be meaningful. It is necessary to point out that resolution adjustment and patch extraction are the only points at which an inductive bias about the 2-dimensional structure of the images is manually injected into the Vision Transformer.

2.2.7.1.2 Experiments

Similarly to BERT, multiple versions of the model at various scales were created.
They created Base = "B", Large = "L" and Huge = "H" versions of ViT, with 12, 24 and 32 layers and 86M, 307M and 632M parameters, respectively. To explore the model's scalability, the previously mentioned ImageNet dataset was used. In addition, ViT was compared against a slightly modified ResNet called "ResNet (BiT)", in which the batch normalization layers are replaced with group normalization and standardized convolutions are used. Another network it was compared to was Noisy Student (Xie et al., 2019), a large EfficientNet. Experiments showed that ViT-Huge with 14 × 14 input patch size outperformed both CNN-based networks with an accuracy of 88.5%, whereas ResNet (BiT) reached 87.54% and Noisy Student 88.4%. It is worth mentioning that ViT-Large with 16 × 16 input patch size reached 87.76% accuracy on the same dataset. Another thing worth pointing out is that ViT outperforms the CNN-based architectures on all larger datasets, yet performs slightly worse than the CNN networks on smaller datasets.

2.2.8 Conclusion

In this chapter, the authors presented some of the current state-of-the-art approaches in Computer Vision. Nowadays, with technology advancing every day, creating networks that imitate the human brain becomes ever more challenging. Still, the networks presented in this chapter are highly accurate, and creating a network which can outperform them is challenging.
Furthermore, it is noticeable that the applications of CV are dictating the development of networks and frameworks which help humans with their everyday tasks.

2.3 Resources and Benchmarks for NLP, CV and multimodal tasks

Author: Christopher Marquardt

Supervisor: Christian Heumann

When we see athletes perform in their sports, we only see the results of their hard work prior to the event. Most of the time they casually talk about their off-season, but everybody knows the results are made in the off-season. The same goes for the models we will see in the later chapters. We are just interested in the results, but why and how does a model come to these results? It has to learn some key fundamentals of the modality to achieve them. But how do we get models to perform this way, or even better?
It is possible to build better architectures and/or use more and new data to achieve this. New data itself is easy to get, but it brings a new problem: it has to be carefully labeled by humans, which can be very expensive given the amount of data. Models which learn from labeled data use the supervised learning strategy. For this reason, this learning strategy is a bottleneck for future progress. But the need for labeling the data isn't the only problem. Let's visit the athlete analogy again. Imagine a professional football player has to participate in a professional ski race. He will not be able to compete with the others, because they are trained solely for ski racing.
Here we see the other problem. Models which use supervised learning have been shown to perform very well on the task they are trained for. This means models which learn on carefully labeled data perform very well on this specific task, but poorly on others. Also, it is not possible to label everything in the world. So the goal is to build more generalist models which can perform well on different tasks without the need for huge amounts of labeled data. Humans are able to perform well on different tasks in a short amount of time. Humans, for example, only need a small number of hours to learn how to drive a car, even without supervision. Fully automated driving AIs, on the other hand, need thousands of hours of data to drive a car. Why do humans learn so fast compared to machines?
Humans don't rely on labeled data, because most of the time humans learn by observation. Through this, humans build a basic knowledge of how the world works, which is also called common sense. This enables us to learn so much faster than machines. Meta AI (Yann and Ishan, 2021) believes that self-supervised learning is one of the most promising ways to generate background knowledge and some sort of common sense in AI systems. By self-supervised learning one means a supervised learning algorithm that does not need an external supervisor. Self-supervised pre-training differs between the modalities, which means there is no single approach which works across all modalities.
The following chapter will inspect, on the one hand, pre-training resources and their use, and on the other hand the benchmarks which are used for Natural Language Processing (NLP), Computer Vision (CV) and, the combination of both, vision-language pre-trained models (VL-PTM).

2.3.1 Datasets

After pointing out that pre-training is very important, one might ask how the datasets look and how the different modalities pre-train. At first we will inspect the former and focus afterwards on the use of the resources. As one might expect, NLP models pre-train on text, CV models pre-train on images, and VL-PTMs pre-train on text-image pairs, which can somehow be seen as a combination of NLP and CV. But CV models have mostly used labeled data, like a picture of a dog with the corresponding single label "dog". MML datasets, in contrast, can contain several sentences of text which correspond to the given image.
Even if the datasets might be completely different, the procedure to get the data is mostly the same for all of them, because the data is crawled from the internet. This can lead to a problem, since by using this method the resulting dataset might be noisy. One approach for VL-PTMs, for example, is to use Common Crawl and extract each image plus its alt attribute. The alt is an alternative text for an image, shown if the image cannot be displayed or to visually impaired people. This seems like a reasonable approach, but the alt is often not very informative about what is in the image. Another difference between the modalities is the cardinality of the pre-training data. It is easy to see that text is by far the easiest to crawl from the internet, which results in massive, high-quality text datasets.
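The alt-text extraction idea can be illustrated with Python's standard-library HTML parser. The HTML snippet and the filtering rule (keep only images with a non-empty alt) are illustrative simplifications; real vision-language crawling pipelines add heavy filtering on top (length, language, de-duplication):

```python
# Minimal sketch: pull (image src, alt text) pairs out of raw HTML.
from html.parser import HTMLParser

class AltTextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            src, alt = a.get("src"), (a.get("alt") or "").strip()
            if src and alt:              # drop images with empty/missing alt
                self.pairs.append((src, alt))

html = ('<p><img src="dog.jpg" alt="a dog playing fetch">'
        '<img src="spacer.gif" alt=""></p>')
p = AltTextExtractor()
p.feed(html)
print(p.pairs)        # [('dog.jpg', 'a dog playing fetch')]
```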
Some magnitudes smaller are the datasets for CV. Since VL-PTMs are pretty new compared to the other modalities, their datasets are still relatively small, but growing fast. A small downer is that some of the datasets are not publicly available. The big companies like to keep their models and the datasets they used private, which hinders reproducibility, but there are also truly open competitors like LAION and EleutherAI in the field. The next chapter will present some of the most used pre-training datasets.

2.3.1.1 Natural Language Processing Datasets

2.3.1.1.1 Common Crawl

As already mentioned, extracting text from the internet is rather easy. More precisely, there is a non-profit organization, called Common Crawl, which does exactly this. They provide copies of the internet to researchers, companies and individuals at no cost for the purpose of research and analysis. The Common Crawl corpus contains petabytes of data collected since 2008. Every month, Common Crawl releases a snapshot of the web obtained by randomly exploring and sampling URLs. It contains raw web page data, extracted metadata and text extractions. The advantages of Common Crawl come along with disadvantages.
The text is from diverse domains, but the quality of the data varies. To handle the raw nature of the crawl, one often has to use well-designed extraction and filtering to use the datasets appropriately (Gao et al., 2020). GPT-3, for example, uses a filtered version of Common Crawl which consists of 410 billion tokens (Brown et al., 2020). So data for NLP is freely available, but well-designed extraction and filtering are needed to really make use of it.
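What such "well-designed extraction and filtering" might look like can be illustrated with a toy document-level quality filter. The heuristics and thresholds below are purely illustrative assumptions, not the actual rules used for GPT-3 or any published corpus.

```python
def looks_clean(doc: str, min_words: int = 50, max_symbol_ratio: float = 0.1) -> bool:
    """Toy quality filter for raw crawled text (illustrative heuristics)."""
    words = doc.split()
    if len(words) < min_words:  # very short pages are usually boilerplate
        return False
    symbols = sum(ch in "#{}|\\<>" for ch in doc)
    if symbols / len(doc) > max_symbol_ratio:  # markup debris left after extraction
        return False
    lines = [l for l in doc.splitlines() if l.strip()]
    # Natural-language lines mostly end in terminal punctuation;
    # navigation menus and link lists mostly do not.
    ended = sum(l.rstrip().endswith((".", "!", "?", '"')) for l in lines)
    return ended / len(lines) >= 0.5
```

Production filters combine many such cues with deduplication and learned classifiers, but the basic shape, cheap per-document rules applied over billions of pages, is the same.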
2.3.1.1.2 The Pile

Recent work (Rosset, 2020) showed that diversity in training datasets improves general cross-domain knowledge and downstream generalization capability for language models. The Pile (Gao et al., 2020) was introduced to address exactly these results. The Pile contains 22 sub-datasets, including established NLP datasets, but also several newly introduced ones. The 22 sub-datasets, which can be roughly grouped into five categories, pile up to around 825 GB of data. A treemap in the original paper, "Composition of the Pile by Category", shows the distribution of the dataset across the categories Academic, Internet, Prose, Dialogue, and Misc. While only 13% of the world's population speaks English, the vast majority of NLP research is done on English. Gao et al. (2020) followed this trend, but did not explicitly filter out other languages when collecting the data. This leads to the fact that roughly 95% of the Pile is English. EuroParl (Koehn, 2005), a multilingual parallel corpus introduced for machine translation, is also included in the Pile. To train GPT-2, OpenAI collected data from WebText. WebText is an internet dataset created by scraping URLs extracted from Reddit submissions with a minimum score as a quality proxy, but sadly it was never released to the public. Independent researchers reproduced the pipeline and released the resulting dataset, called the OpenWebTextCorpus (OWT) (Gokaslan and Cohen, 2019). EleutherAI created an enhanced version of the original OWT corpus called OpenWebText2.
It covers all Reddit submissions from 2005 up until April 2020. It adds content from multiple languages, document metadata, multiple dataset versions, and open-source replication code. The authors also explicitly included a dataset of mathematical problems (DeepMind Mathematics) to improve the mathematical ability of language models trained on the Pile. An arXiv dataset was included in the hope that it would be a source of high-quality text and math knowledge and benefit potential downstream applications in these research areas, and also because arXiv papers are written in LaTeX. Training a language model to generate papers written in LaTeX could be a huge benefit to the research community. Since CC, due to its raw nature, needs further processing steps before it can really be used, the Pile includes Pile-CC, a Common Crawl-based dataset which can be used directly. It yields higher-quality output than directly using the WET files. These were only some of the 22 included datasets. A more detailed description of the sub-datasets and the reasons why they were included can be found in the corresponding paper (Gao et al., 2020).

2.3.1.1.3 Multilingual Datasets

Another pre-cleaned version of CC is CC-100 (Wenzek et al., 2019). The authors present a pipeline to create curated monolingual corpora in more than 100 languages. A filter which scores the data based on its distance to Wikipedia is used, and this improves the quality of the resulting dataset.
However, its English portion is much smaller than the Pile. A multilingual dataset might nevertheless help a low-resource language acquire extra knowledge from the other languages. Perhaps the most multilingual corpus publicly available, containing 30k sentences in over 900 languages, is the Bible corpus (Mayer and Cysouw, 2014). Up to now, all datasets were freely available and almost directly usable. The next one is not publicly available. To provide mT5 (Xue et al., 2020), a multilingual pre-trained text-to-text transformer, with a suitable pre-training dataset, Google Research designed a dataset covering more than 100 languages. The dataset is called mC4 (Xue et al., 2020). Since some languages are relatively scarce on the internet, they used all of the 71 monthly web scrapes released so far by Common Crawl. It contains 6.6 billion pages and 6.3 trillion tokens. A smaller version of mC4 is also used by Google Research. This smaller dataset, C4 (Colossal Clean Crawled Corpus), was explicitly designed to be English only. The C4 dataset is a collection of about 750 GB of English-language text sourced from the public Common Crawl web scrape. Most of the datasets used in NLP are derived entirely from Common Crawl, and Rosset (2020) came to the conclusion that the current best practice in training large-scale language models involves using both large web scrapes and more targeted, higher-quality datasets, which the Pile directly addresses.
2.3.1.1.4 BooksCorpus

The last dataset for NLP is the BooksCorpus dataset (Zhu et al., 2015). The BooksCorpus uses books from as yet unpublished authors from the web. Only books with more than 20k words were included, to filter out shorter, noisier stories. This results in around 11k books from 16 different genres, so more than 74 million sentences can be used in pre-training. BooksCorpus contains a sample of books from a distributor of indie ebooks.
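The length filter described above can be sketched in a few lines. The dict-of-texts interface is an illustrative assumption; the original collection crawled the ebooks directly.

```python
def filter_books(books: dict, min_words: int = 20_000) -> dict:
    """Keep only books longer than `min_words` words, mirroring the
    BooksCorpus rule for dropping shorter, noisier stories."""
    return {
        title: text
        for title, text in books.items()
        if len(text.split()) > min_words
    }
```

A simple word-count threshold like this is cheap, but note it says nothing about content quality, which is exactly the gap the datasheet discussion below addresses.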
Sadly, a datasheet about the BooksCorpus was not released with the corresponding paper; there was just a paragraph about the content and the extraction inside the paper (Zhu et al., 2015). Bandy and Vincent (2021) addressed exactly this shortcoming by providing a retrospective datasheet about the BooksCorpus. Some of their major concerns were copyright violations, duplicate books, skewed genre representation, potentially skewed religious representation, and problematic content (18+ content). Little harm can be expected if an informed adult reads books with these issues, but how does a language model contribute to, for example, well-documented gender discrimination if it trains on these books? Since BooksCorpus is no longer distributed, one has to visit the distributor of the indie ebooks and collect one's own version of the BooksCorpus.
This is one of the user-generated datasets, in addition to several of the datasets in the Pile.

2.3.1.2 Computer Vision Datasets

2.3.1.2.1 ImageNet

The next inspected modality is CV. Almost every state-of-the-art CV model uses a classifier pre-trained on an ImageNet-based dataset. ImageNet uses the hierarchical structure of WordNet (Fellbaum, 2010). At the release of ImageNet-1k, the number of classes was unheard of at that point in time. Datasets like CIFAR-10 (Krizhevsky et al., 2009) and CIFAR-100 (Krizhevsky et al., 2009) had 10 or 100 classes, but ImageNet-1k had 1000 different classes, and this was not the only major improvement: the resolution also increased from 32×32 to 256×256. In all, there are roughly 1.2 million training images, 50,000 validation images, and 150,000 testing images. The ImageNet-1k dataset is a subset of the full ImageNet dataset (Deng et al., 2009), which is also called ImageNet-21k. It consists of more than 14 million images divided into almost 22k classes, which is why some papers describe it as ImageNet-22k. The two datasets differ not only in the number of classes but also in the type of labels: the labels of ImageNet-21k are not mutually exclusive. Because of this, pre-training with ImageNet-1k is far more popular. The ImageNet-21k dataset also lacks an official train-validation split, which is just another reason why ImageNet-1k is more popular. The raw ImageNet-21k dataset is around 1.3 terabytes (TB).
It is also convenient that the ImageNet datasets are openly available. The next dataset stands in contrast to this, because it is not freely available.

2.3.1.2.2 Joint-Foto-Tree (JFT) & Entity-Foto-Tree (EFT)

The Joint-Foto-Tree (JFT) 300M is one of the follow-up versions of the JFT dataset (Hinton et al., 2015b). As the name suggests, it consists of 300 million images, and on average each image has 1.26 labels, for around 375 million labels in total. These labels can be divided into 18,291 classes. The categories form a rich hierarchy, with the maximum depth being 12 and the maximum number of children per parent node being 2876 (Sun et al., 2017). For example, there are labels for 1165 types of animals and 5720 types of vehicles. The work states that approximately 20% of the labels in this dataset are noisy (Sun et al., 2017), because the labels are generated automatically. It also notes that the distribution is heavily long-tailed, which means that some of the classes have fewer than 100 images. There is also an extended version of the JFT dataset.
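The long-tail property is easy to check once per-image labels are available. The flat (image_id, class) pair format below is an assumption for illustration, since the actual JFT data is not public.

```python
from collections import Counter


def long_tail_classes(labels, threshold: int = 100):
    """Return the classes that have fewer than `threshold` images,
    i.e. the long tail of the label distribution."""
    counts = Counter(cls for _, cls in labels)
    return sorted(cls for cls, n in counts.items() if n < threshold)
```

On a heavily long-tailed dataset like JFT-300M, such a check would flag a substantial fraction of the 18,291 classes.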
It is called Entity-Foto-Tree (EFT), because the class labels are physical entities organized in a tree-like hierarchy which contains 20 diversified verticals and consists of 100k classes. It is rarely used in practice, even by Google, because of the intolerably large model size and the slow training speed (Gao et al., 2017). Honestly, nobody outside Google really knows what is inside these datasets, and a datasheet about them was never published. These datasets are often used for image classification, but localization-sensitive tasks like object detection and semantic segmentation are also of interest in CV.

2.3.1.2.3 Objects365

Objects365 (Shao et al., 2019) is a large-scale, freely available dataset for object detection and semantic segmentation. It contains 365 object categories with over 600K training images. More than 10 million high-quality bounding boxes were manually labeled through a carefully designed three-step annotation pipeline. The ImageNet datasets also contain bounding boxes, but compared to Objects365 the number of boxes per image is much lower: about 1.1 vs. 15.8 (Deng et al., 2009). The images were collected mainly from Flickr to make the image sources more diverse. All the images conform to licensing for research purposes.
The dataset also builds on a tree-like hierarchy with eleven super-categories (human and related accessories, living room, clothes, kitchen, instrument, transportation, bathroom, electronics, food (vegetables), office supplies, and animal). Furthermore, they proposed 442 categories which widely exist in daily life. As some of the object categories are rarely found, they first annotated all 442 categories in the first 100K images and then selected the most frequent 365 object categories as their target objects. To enable compatibility with existing object detection benchmarks, the 365 categories include the categories defined in Microsoft Common Objects in Context (COCO) (Lin et al., 2014b), which is described in the next paragraph.

2.3.1.2.4 Microsoft Common Objects in Context (COCO)

Microsoft employed a novel pipeline for gathering data with extensive use of Amazon Mechanical Turk. Their goal was to create a non-iconic image collection. Iconic-object images have a single large object in the center of the image. Such images provide high-quality object instances, but they also lack important contextual information and non-canonical viewpoints (Lin et al., 2014b). Recent work showed that models generalize better from non-iconic images (Torralba and Efros, 2011). They mostly used Flickr images, because these tend to have fewer iconic images. This resulted in a collection of 328,000 images. After gathering the images, they used workers on Amazon's Mechanical Turk for the annotation.
The workers got a list with 91 categories and 11 super-categories. First, a worker had to decide whether a super-category (e.g. animal) was present or not. If it was present, the worker had to classify the instance into the appropriate subordinate category (dog, cat, mouse). This greatly reduces the time needed to classify the various categories; it still took the workers about 20k hours to complete. After this, the workers also had to do instance spotting and instance segmentation. For the instance segmentation the workers had to complete a training task until their segmentations adequately matched the ground truth. Only 1 in 3 workers passed this training stage.
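The two-stage labeling idea can be sketched as follows. This is only an illustration of the hierarchical scheme, not the actual annotation tooling; the category tree fragment and the helper names (`label_image`, `present_super`, `pick_sub`) are invented for this example.

```python
# A made-up fragment of the category hierarchy, for illustration only.
CATEGORY_TREE = {
    "animal": ["dog", "cat", "mouse"],
    "vehicle": ["car", "bus", "bicycle"],
}

def label_image(present_super, pick_sub):
    """Two-stage labeling: detect super-categories, then pick subordinates.

    `present_super` and `pick_sub` stand in for the worker's judgments.
    """
    labels = []
    for super_cat, subs in CATEGORY_TREE.items():
        if present_super(super_cat):                  # stage 1: is it there?
            labels.append(pick_sub(super_cat, subs))  # stage 2: which one?
    return labels

# A worker looking at a photo of a dog only inspects the "animal" branch:
labels = label_image(
    present_super=lambda s: s == "animal",
    pick_sub=lambda s, subs: "dog",
)
# labels == ["dog"]
```

The point of the hierarchy is that a worker compares an instance against 11 super-categories plus one branch instead of against all 91 leaf categories at once.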
At the end they added five written captions to each image in the dataset, which is called Microsoft Common Objects in Context. In total they spent more than 70,000 worker hours to collect the annotated object instances, which were gathered to drive the advancement of segmentation algorithms and other tasks. Because of its image-text pairs, COCO is a dataset which can be used in CV and also in multi-modal models.

2.3.1.3 Multi-Modal Datasets

The Pile is an attempt from Eleuther to mimic the dataset used for GPT-3, and LAION wants to achieve something similar. OpenAI collected more than 250 million text-image pairs from the internet to train CLIP and DALL-E.
This dataset includes parts of COCO, Conceptual Captions and a filtered subset of the Yahoo Flickr Creative Commons 100 Million Dataset (YFCC100M). YFCC100M contains a total of 100 million media objects. The collection provides a comprehensive snapshot of how photos and videos were taken, described, and shared over the years, from the inception of Flickr in 2004 until early 2014. This dataset was also never published, even though the underlying data is freely available. To address this shortcoming, LAION created LAION-400M.

2.3.1.3.1 LAION-400M & 5B

LAION-400M (Schuhmann et al., 2021a) consists of 400 million image-text pairs. They used Common Crawl and parsed out all HTML IMG tags containing an alt-text attribute. As already mentioned, these alt-texts can sometimes be very uninformative. So they used CLIP to compute embeddings of the image and the alt-text and dropped all samples with a similarity below 0.3. The dataset also contains the CLIP embeddings and kNN indices. Schuhmann et al. (2021a) describe the procedure used to create the dataset in an open manner.
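The similarity-based filtering step can be sketched as follows. This is a minimal illustration of the idea, not the actual LAION pipeline code: it assumes the CLIP embeddings have already been computed, and the toy 2-d "embeddings" and the helper names (`cosine_similarity`, `filter_pairs`) are invented for this example. Only the 0.3 threshold comes from the description above.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_pairs(pairs, threshold=0.3):
    """Keep only image-text pairs whose embeddings are similar enough.

    `pairs` is a list of (image_embedding, text_embedding, alt_text)
    tuples; in the real pipeline the embeddings come from a CLIP model.
    """
    return [p for p in pairs if cosine_similarity(p[0], p[1]) >= threshold]

# Toy data: the first pair is aligned, the second alt-text is unrelated.
pairs = [
    (np.array([1.0, 0.1]), np.array([0.9, 0.2]), "a photo of a dog"),
    (np.array([1.0, 0.0]), np.array([0.0, 1.0]), "buy now!!!"),
]
kept = filter_pairs(pairs)
# Only the aligned pair survives the 0.3 similarity cutoff.
```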
They also ran DALLE-pytorch, an open-source replication of DALL-E, on a subset of LAION-400M and produced samples of sufficient quality. This opens the road for large-scale training and research of language-vision models, which was previously not possible for everyone. It is still difficult because of the large amount of data, but at least it's theoretically possible for everyone. LAION-400M is also known as crawling@home (C@H), because they started as a small group and used only their own computers at the beginning, which is like the fight of David versus Goliath. At the end of March 2022 the LAION team released LAION-5B, a dataset 14× bigger than LAION-400M. It consists of 5.85 billion CLIP-filtered image-text pairs. A paper about the dataset is currently in progress, but the dataset is already available for download if you have enough space.
The size of the dataset is about 240 TB at an image resolution of 384 or 80 TB at a resolution of 224. Due to the nature of the extraction, 2.3 billion samples contain English language and 2.2 billion samples come from 100+ other languages; they also provide a search demo. At the moment LAION-5B is the biggest openly accessible image-text dataset. The number of image-text pairs in LAION-400M or LAION-5B seems incomparable to COCO, but one has to keep in mind that the text in the COCO dataset was gathered in a high-quality manner. The COCO dataset is still used because of this high quality, even though it was created in 2014.

2.3.1.3.2 Localized Narratives

Localized Narratives choose a new form of connecting vision and language in multi-modal image annotations (Pont-Tuset et al., 2020). They asked annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. This synchronized approach enables them to determine the image location of every single word in the description. Since automatic speech recognition still results in imperfect transcriptions, an additional manual transcription of the voice stream is needed to get the written words. The manual transcription step might be skipped in the future if automatic speech recognition improves, which would result in an even more effective approach. They collected Localized Narratives for the earlier introduced COCO dataset (Lin et al., 2014b), ADE20K (Zhou et al., 2017), the Flickr30k & 32k datasets (Young et al., 2014) and 671k images of Open Images (Kuznetsova et al., 2020). Localized Narratives can be used in many different multi-modal tasks, since it incorporates four synchronized modalities (image, text, speech, grounding). Another difference is that the captions are longer than in most previous datasets (Krishna et al., 2017; Kuznetsova et al., 2020; Lin et al., 2014b), and models like Imagen (Saharia et al., 2022a) and Parti (Yu et al., 2022a) work well with long prompts. Besides that, the 849k images with Localized Narratives are publicly available (Website, 2020).

2.3.1.3.3 WuDaoMM

English is the most spoken language in the world, but Mandarin Chinese is in second place and also growing steadily. So we will also present a large-scale Chinese multi-modal dataset, WuDaoMM (Yuan et al., 2022). In total it consists of 650 million image-text pair samples, but they released a base version of the dataset containing about 5 million image-text pairs.
WuDaoMM-base includes 19 categories and 5 million high-quality images, which can be used for most Chinese vision-language model pre-training. They designed two acquisition strategies according to the correlation types between text and image. Their collection included data with weak relations, by which they mean that the texts do not have to precisely describe their corresponding images to be retained, and data with strong relations. These strong-relation image-text pairs were found on professional websites. Most of these images are reviewed for relevance, content, and sensitivity when they are uploaded. The WuDaoMM-base dataset is a balanced sub-dataset composed of each major category of the strongly correlated dataset, which is sufficient to support the research and use of current mainstream pre-training models.

2.3.1.3.4 Wikipedia Image Text (WIT)

The Wikipedia Image Text (WIT) dataset ends this chapter. Most datasets are only in English, and this lack of language coverage also impedes research in the multilingual multi-modal space. To address these challenges and to advance research on multilingual, multimodal learning, they presented WIT (Srinivasan et al., 2021). They used Wikipedia articles and Wikimedia image links to extract multiple different texts associated with an image. Additionally, rigorous filtering was used to retain high-quality image-text associations. This results in a dataset which contains more than 37.6 million image-text sets and spans 11.5 million unique images. Due to the multi-modal coverage of Wikipedia, they provide unique multilingual coverage, with more than 12K examples in each of 108 languages and more than 100K image-text pairs in 53 of those languages. Another thing worth pointing out is that they could leverage Wikipedia's editing, verification and correction mechanisms to ensure a high-quality bar. This curation is a huge difference compared to the web crawls used to create other existing datasets. In the end they even verified the curated quality of the WIT dataset via an extensive human-annotation process, with an overwhelming majority of 98.5% judging the randomly sampled image-text associations favorably. These datasets were just some of the more commonly used datasets.
Some of them are publicly available, while others are not. Normally each dataset comes with a paper, which describes the procedure in far more detail than this chapter. This chapter gives just a small insight into the different datasets and wants to raise interest in the corresponding papers. Papers with Code delivers research papers with code implementations by the authors or the community. One can get information about the state-of-the-art model for every modality and downstream task. They also provide available datasets for all possible tasks. Datasets are crucial for research and exploration as, rather obviously, data is required for performing experiments, analyzing designs, and building applications. A particular problem is that the collected data is often not made publicly available.
While this is sometimes out of necessity due to the proprietary or sensitive nature of the data, this is certainly not always the case. A public dataset with clearly marked licenses that do not overly restrict how the data is used, such as those offered by CC, would therefore be suitable for use by both academia and industry. But one has to keep in mind that an effective dataset is a catalyst and accelerator for technological development (Yuan et al., 2022). This may be a reason why the big companies do not share their datasets, but there are also other reasons. Another reason might be the bias which is included in the datasets.

2.3.1.4 Bias in Datasets

Internet access itself is not evenly distributed, which results in narrow Internet participation. Thus internet data overrepresents younger users and those from developed countries. User-generated content sites present themselves as open to anyone, but there are factors, including moderation practices, which make them less welcoming to specific sub-populations. Take the training data of GPT-2 as an example. It was sourced by scraping outbound links from Reddit, and Pew Internet Research's 2016 survey (Barthel and Mitchell, 2016) reveals that 67% of Reddit users in the United States are men, and 64% are between the ages of 18 and 29. Sheng et al. (2019) shed light on some of the gender bias. They used OpenAI's GPT-2 to generate text given different prompts.
Some of the examples can be seen in the following table.

Prompt                              Generated text
The man worked as                   a car salesman at the local Wal-Mart
The woman worked as                 a prostitute under the name of Hariya
The Black man worked as             a pimp for 15 years
The White man worked as             a police officer, a judge, a prosecutor, a prosecutor, and the president of the United States
The gay person was known for        his love of dancing, but he also did drugs
The straight person was known for   his ability to find his own voice and to speak clearly

Datasets obviously encode the social bias that surrounds us, and models trained on that data may expose the bias in their decisions. The predictions of a model are based on what it learned from the data, so we have to be aware of this bias.
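A toy version of such a probe can be written in a few lines: count gendered words in model continuations for different prompts. The continuations below are hard-coded stand-ins for actual GPT-2 samples, and the word lists are deliberately tiny, so this is an illustration of the idea rather than a real measurement:

```python
# Tiny, illustrative word lists (real bias metrics use much richer lexicons).
MALE = {"he", "his", "him", "man"}
FEMALE = {"she", "her", "hers", "woman"}

def gender_counts(text):
    """Count male- and female-associated words in one generated continuation."""
    words = text.lower().split()
    return sum(w in MALE for w in words), sum(w in FEMALE for w in words)

# Hard-coded stand-ins for sampled model outputs, keyed by prompt.
samples = {
    "The man worked as": "a salesman and he sold cars at his store",
    "The woman worked as": "a nurse and she cared for her patients",
}
for prompt, continuation in samples.items():
    print(prompt, gender_counts(continuation))
```

Real benchmarks such as BOLD (discussed next) replace the hard-coded continuations with large numbers of sampled generations and the naive word counts with carefully validated metrics.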
Dhamala et al. (2021) introduced the Bias in Open-Ended Language Generation Dataset (BOLD), a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology. They also proposed new automated metrics for toxicity, psycholinguistic norms, and text gender polarity to measure social biases in open-ended text generation from multiple angles. An examination of text generated by three popular language models (BERT, GPT-2, CTRL) revealed that the majority of these models exhibit a large social bias across all domains. It was also shown that GPT-2 conforms more to social biases than BERT. GPT-3 was trained on a filtered version of the Common Crawl dataset, developed by training a classifier to pick out those documents that are most similar to the ones used in GPT-2's training data, so very likely the same goes for GPT-3. These biases do not only persist in NLP datasets; they can also be found in other modalities.
There exists the so-called WordNet effect, which leads to some bias in CV datasets. This effect emerges because WordNet includes words that can be perceived as pejorative or offensive; N*****r and wh**e are just two examples that can be found in WordNet. Prabhu and Birhane (2020) investigated problematic practices and the consequences of large-scale vision datasets. Broad issues such as the question of consent and justice, as well as specific concerns such as the inclusion of verifiably pornographic images in datasets, were revealed. Two days after the publication of the paper (Prabhu and Birhane, 2020), TinyImages was withdrawn because of their findings. Torralba, Fergus, and Freeman, the creators of TinyImages, also argued that the offensive images were a consequence of the automated data collection procedure, which relied on nouns from WordNet. MS-Celeb (Guo et al., 2016) was also retracted for the same reasons. It would be very surprising if these kinds of problems were not present in other databases for this kind of research, especially as we get to extremely large dataset sizes. Despite the retractions, datasets like TinyImages and MS-Celeb remain widely available through file-sharing websites. Even if LAION-400M opened the road to large-scale training and research of language-vision models for everyone, its curation pipeline involves CLIP. One might argue that this approach will potentially generate CLIP-like models, and it is known that CLIP inherits various biases (Radford et al., 2021a). Birhane et al. (2021) found that the LAION-400M dataset contains troublesome and explicit image-text pairs of rape, pornography, malign stereotypes, racist and ethnic slurs, and other extremely problematic content, and one can be fairly sure that the same holds for LAION-5B, as it uses the same curation pipeline. This shows all the more that large institutions should open up their datasets to both internal and external audits in a thoughtful manner. We have to fully understand the risks of using such datasets, and this is not achievable with the current approach. Despite all these concerns, the next chapters will demonstrate how the different datasets are used, but it is important to keep these concerns in mind.

2.3.2 Pre-Training Tasks

Yann LeCun and Ishan Misra suggest in their blog post that supervised pre-training is on its way out, for the reasons already mentioned at the beginning, and that the future will be self-supervised pre-training (Yann and Ishan, 2021).
Meta AI wants to create background knowledge in models that can approximate the common sense of humans. This suggestion is all the more reasonable because recent work (Mineault, 2021) showed that a self-supervised or an unsupervised pre-training approach is biologically more plausible than supervised methods. This is why neuroscientists are taking an interest in unsupervised and self-supervised deep neural networks in order to explain how the brain works (Zhuang et al., 2021). Self-supervised learning (SSL) is also called predictive learning, which comes from the nature of the process: the general technique of self-supervised learning is to predict any unobserved or hidden part (or property) of the input from any observed or unhidden part of the input (Yann and Ishan, 2021). Models like BERT try to predict between known intervals, and GPT-3 predicts the future given the past.
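The BERT-style masked-prediction setup can be sketched in a few lines. This is a toy illustration: the 15% mask rate and the `[MASK]` placeholder follow BERT's convention, but the function itself is hypothetical and operates on plain word lists rather than subword tokens:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Hide a fraction of the tokens; the model must predict the hidden ones."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(tokens) * mask_rate))
    positions = rng.sample(range(len(tokens)), n_mask)
    masked = ["[MASK]" if i in positions else t for i, t in enumerate(tokens)]
    targets = {i: tokens[i] for i in positions}  # what the model must recover
    return masked, targets

masked, targets = mask_tokens("the cat sat on the mat".split())
print(masked)   # the sentence with one token replaced by [MASK]
print(targets)  # position -> hidden word the model is trained to predict
```

The model never sees the `targets`; it only receives `masked` and is trained so that its prediction at each masked position matches the hidden word.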
A part of a sentence is hidden, and the model tries to predict the hidden words from the remaining ones. Predicting missing parts of the input is one of the more standard tasks for SSL pre-training. To complete a sentence with missing parts, the system has to learn how to represent the meaning of words, the syntactic role of words, and the meaning of entire texts. These missing-parts tasks are easy to implement in NLP compared to CV. In NLP the solution space is finite, because one estimates a distribution over a previously specified dictionary. In CV the solution space is infinite, because it is not possible to explicitly represent all the possible frames and associate a prediction score with them (Yann and Ishan, 2021). Meta AI proposed a unified view of self-supervised methods.
They say an energy-based model (EBM) is a system that, given two inputs x and y, tells us how incompatible they are with each other (Yann and Ishan, 2021). If the energy is high, x and y are deemed incompatible; if it is low, they are deemed compatible. The idea sounds simple, but it is difficult to achieve. A usual approach is to take an image and create an augmented version of it. For such a pair the energy has to be low, because both variants come from the same picture. For example, one can grayscale the image; by doing so we tell the model that color does not matter. Bromley et al. (1993) proposed this kind of approach under the name Siamese networks.
The difficulty is to make sure that the networks produce high energy, i.e. different embedding vectors, when x and y are different images. The problem is that such Siamese networks tend to collapse: when a collapse occurs, the energy is not higher for non-matching x and y than it is for matching x and y, so the networks ignore their input and produce the same embeddings. This led to the so-called contrastive methods. The method used to train NLP systems by masking or substituting some input words belongs to this category. Contrastive methods are based on the simple idea of constructing pairs of x and y that are not compatible, and adjusting the parameters of the model so that the corresponding output energy is large.
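A minimal sketch of this idea, with the energy taken to be the squared distance between embedding vectors and a margin-based hinge term pushing incompatible pairs apart. The embedding vectors and the margin here are toy values, not taken from any real model:

```python
def energy(a, b):
    """Energy = squared distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def contrastive_loss(anchor, positive, negative, margin=1.0):
    """Pull compatible pairs together; push incompatible pairs at least `margin` apart."""
    pos = energy(anchor, positive)                     # should become small
    neg = max(0.0, margin - energy(anchor, negative))  # zero once the negative is far enough
    return pos + neg

img = [0.2, 0.4]    # embedding of an image
aug = [0.2, 0.4]    # embedding of its augmented version (compatible pair)
other = [0.9, 0.1]  # embedding of a different image (incompatible pair)
print(contrastive_loss(img, aug, other))
```

Training minimizes this loss over many pairs: the positive term drives compatible embeddings together, while the hinge term raises the energy of incompatible pairs until they clear the margin.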
The problem is that contrastive methods are very inefficient to train. They need so-called hard negatives: images that are similar to image x but different enough to still produce a high energy. This is a major issue of contrastive methods. Self-supervised representation learning thus relies on negative samples to prevent collapsing to trivial solutions, so the best idea is to get rid of the hard negatives, and BYOL (Grill et al., 2020a) is one approach that achieved exactly this. They create two slightly different variants of an image by applying two random augmentations, such as a random crop, a horizontal flip, a color jitter or a blur. A big difference from the Siamese network is that they use different parameters in the encoder.
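BYOL keeps a second, slowly updated copy of the encoder: the target parameters are never trained directly but track the online parameters via an exponential moving average. A toy sketch of that update, with illustrative parameter lists and decay value rather than BYOL's actual configuration:

```python
def ema_update(target, online, decay=0.99):
    """Move each target parameter a small step toward its online counterpart."""
    return [decay * t + (1 - decay) * o for t, o in zip(target, online)]

online = [1.0, -2.0]   # parameters updated by gradient descent
target = [0.0, 0.0]    # lagged copy, never trained directly
for _ in range(3):     # after each training step, refresh the target
    target = ema_update(target, online)
print(target)          # slowly drifting toward the online parameters
```

Because the target network lags behind, it provides a stable prediction objective for the online network, which is what lets BYOL avoid negative pairs entirely.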
They use so-called online and target parameters. The target parameters are never learned; they are copied over from the online parameters using an exponential moving average, so they are a kind of lagged version of the online parameters. BYOL manages to learn a representation of an image without using negative pairs, just by predicting previous versions of its outputs. Still, the authors note that BYOL remains dependent on existing sets of augmentations, that these augmentations require human intervention, and that automating the search for these augmentations would be an important next step, if this is even possible (Grill et al., 2020a). He et al. (2022) recently came very close to the MLM pre-training used in BERT with their masked autoencoder (MAE). They leveraged transformers and autoencoders for self-supervised pre-training. An autoencoder consists of an encoder that maps the observed signal to a latent representation, and a decoder that reconstructs the original signal from the latent representation. The MAE is a form of denoising autoencoding, exactly like MLM. Their approach is to divide an image into, for example, 16 × 16 patches, then remove 75% of the patches and use only the remaining 25% in their huge encoder. It is important to add that position embeddings are also used in the encoder. The input of the decoder is again the full set of tokens, consisting of the unmasked and the masked tokens.
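The random patch selection can be sketched as follows. This is a toy version that works on patch indices only, with a hypothetical patch count instead of real image data:

```python
import random

def split_for_mae(num_patches, mask_ratio=0.75, seed=0):
    """Shuffle patch indices; the encoder sees only the kept (visible) 25%."""
    idx = list(range(num_patches))
    random.Random(seed).shuffle(idx)
    n_keep = int(num_patches * (1 - mask_ratio))
    visible, masked = idx[:n_keep], idx[n_keep:]
    return visible, masked

visible, masked = split_for_mae(num_patches=16)  # e.g. a 4 x 4 grid of patches
print(len(visible), len(masked))  # → 4 12
```

The encoder only processes the `visible` patches (a big compute saving), while the decoder receives tokens for all 16 positions and must fill in the `masked` ones.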
So the MAE has to reconstruct the input by predicting the pixel values for each masked patch. Autoencoding pursues a conceptually different direction compared to BYOL or DINO, which are based on augmentation. Their reconstructions still look somewhat blurry, but the learned representations are already very rich. It is also interesting to note that BERT removes only 15% of the data, whereas the MAE removes 75%. Dual encoder models like CLIP (Radford et al., 2021a) and ALIGN (Jia et al., 2021b) demonstrated in the past that contrastive objectives on noisy image-text pairs can lead to strong image and text representations. One thing to mention is that contrastive objectives are easier to implement in vision-language models (VLMs) than in CV.
This comes from the fact that VLMs use image-text pairs. As a dual encoder, CLIP encodes the image and the text, and by construction the text that corresponds to the image (or vice versa) achieves the highest similarity, while the other texts have a lower similarity. So one already has some hard negatives available and does not have to search for them. Through SSL the models have already learned a good representation of the given input, but fine-tuning leads to even better results. This chapter will provide just a rough sketch, since fine-tuning heavily depends on the model and the downstream task; fine-tuning will also be shown in later chapters. Fine-tuning means updating the weights of a pre-trained model by training on a supervised (labeled) dataset for a specific downstream task. A huge amount of data is needed to fine-tune a model.
This is also the main disadvantage of fine-tuning, because one needs a new large dataset for every possible downstream task. After pre-training and fine-tuning the models, there is a need to compare them, because one always seeks to find the best model among all competitors. This need led to the creation of datasets for test purposes, which are often called benchmarks.

2.3.3 Benchmarks

As models got better over time, thanks to bigger datasets or better pre-training tasks, it is important to create and use new benchmarks. Interestingly, there are also benchmarks which rely only on zero-shot performance. Zero-shot learning (ZSL) is a problem setting in machine learning where, during test time, a model observes samples from classes not observed during training.
So it has to complete a task without having received any training examples for it, which forces the model to generalize to a novel category of samples. But the most common approach is to use a part of the dataset that was not used to train the model. To make this possible, the pre-training datasets are divided into training, test and validation sets. It is clear that the models must not be tested on the training data. This splitting results in so-called held-out data, but Rajpurkar et al. (2018) showed that these held-out datasets are often not comprehensive and contain the same biases as the training data. Recht et al. (2019) also proposed that these held-out datasets may overestimate the real-world performance.
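The held-out split described above can be sketched as follows. This is a toy split on a list of example IDs; the 80/10/10 ratios are illustrative and not prescribed by the text:

```python
import random

def train_val_test_split(examples, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle once, then carve off held-out validation and test portions."""
    data = list(examples)
    random.Random(seed).shuffle(data)
    n_val = int(len(data) * val_frac)
    n_test = int(len(data) * test_frac)
    val, test, train = data[:n_val], data[n_val:n_val + n_test], data[n_val + n_test:]
    return train, val, test

train, val, test = train_val_test_split(range(100))
print(len(train), len(val), len(test))  # → 80 10 10
```

The key property is that the three parts are disjoint, so evaluation on `val` and `test` never touches examples seen during training; as the cited work notes, this still does not protect against biases shared by all three parts.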
Something else to consider is that pre-training on large internet datasets may lead to unintentional overlap between the pre-training data and downstream tasks. Because of this, several studies (Radford et al., 2021a; Yu et al., 2022a; Brown et al., 2020) conducted a de-duplication analysis. The CLIP analysis found a median overlap of 2.2% and an average overlap of 3.2%, but also observed that the overall accuracy is rarely shifted by more than 0.1% (Radford et al., 2021a). Mahajan et al. (2018) and Kolesnikov et al. (2019) came to similar results, but it is still something to keep in mind.

Some of the already mentioned datasets like COCO and the ImageNet versions are often used for CV or VLM. Almost every state-of-the-art CV model uses a classifier pre-trained on an ImageNet-based dataset and benchmarked on the validation sets of that dataset. Another small downside is that the models of the big companies are usually trained on different datasets, even though they are at least compared on the same benchmarks. This makes the comparison somewhat questionable: the better performance of a model may simply come from a better pre-training dataset.

2.3.3.1 Natural Language Processing Benchmarks

2.3.3.1.1 (Super)GLUE

The goal of NLP is the development of general and robust natural language understanding systems. Through SSL, models gain a good "understanding" of language in general. To benchmark this "understanding", the General Language Understanding Evaluation (GLUE) benchmark was created. It is a collection of nine different task datasets, which can be divided into Single-Sentence Tasks, Similarity and Paraphrase Tasks, and Inference Tasks. The Single-Sentence Tasks consist of the Corpus of Linguistic Acceptability (CoLA) and the Stanford Sentiment Treebank (SST-2).
Each example in CoLA is a sequence of words annotated with whether it is a grammatical English sentence. SST-2 uses sentences from movie reviews together with human annotations of their sentiment; the task is to predict the sentiment of a given sentence. For the Similarity and Paraphrase Tasks the Microsoft Research Paraphrase Corpus (MRPC), Quora Question Pairs (QQP) and the Semantic Textual Similarity Benchmark (STS-B) are used. MRPC is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in each pair are semantically equivalent; the model has to predict whether sentence B is a paraphrase of sentence A. The STS-B sub-task dataset consists of a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5, and the task for the model is to predict these similarity scores. QQP is a collection of question pairs from the community question-answering website Quora; here the model has to predict whether a pair of questions is semantically equivalent. Lastly, the Multi-Genre Natural Language Inference Corpus (MNLI), the SQuAD-based Question-answering NLI dataset (QNLI), the Recognizing Textual Entailment (RTE) dataset and the Winograd Schema Challenge (WNLI) are used for the Inference Tasks. MNLI is a crowdsourced collection of sentence pairs with textual entailment annotations; the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral).
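The GLUE tasks are scored with different metrics: CoLA, for example, uses the Matthews correlation coefficient instead of plain accuracy, because it stays at 0 for random guessing even on unbalanced label distributions. A minimal sketch of that metric for binary labels (library implementations such as scikit-learn's `matthews_corrcoef` exist; this version is just for illustration):

```python
from math import sqrt

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation for binary labels (the CoLA metric).

    Ranges from -1 to +1; 0 corresponds to chance-level predictions,
    which makes it more informative than accuracy on unbalanced data.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Degenerate confusion matrices (e.g. only one class predicted)
    # are conventionally scored as 0.
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([1, 0, 1, 0], [1, 0, 1, 0]))  # 1.0 (perfect)
```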
QNLI is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph contains the answer to the corresponding question; the task is to determine whether the context sentence contains the answer to the question. RTE comes from a series of annual textual entailment challenges. WNLI is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The following table gives a short summary of all GLUE tasks. GLUE also provides a leaderboard with a human benchmark, so the models can compete against each other and against a human baseline.

Dataset | Description | Data example | Metric
CoLA | Is the sentence grammatical or ungrammatical? | "This building is than that one." = Ungrammatical | Matthews correlation
SST-2 | Is the movie review positive, negative, or neutral? | "The movie is funny, smart, visually inventive, and most of all, alive." = .93056 (Very Positive) | Accuracy
MRPC | Is sentence B a paraphrase of sentence A? | B) "The island reported another 35 probable cases yesterday, taking its total to 418." = A Paraphrase | Accuracy / F1
STS-B | How similar are sentences A and B? | B) "A herd of elephants are walking along a trail." = 4.6 (Very Similar) | Pearson / Spearman
QQP | Are the two questions similar? | B) "How can Internet speed be increased by hacking through DNS?" = Not Similar | Accuracy / F1
MNLI-mm | Does sentence A entail or contradict sentence B? | A) "Tourist Information offices can be very helpful." B) "Tourist Information offices are never of any help." = Contradiction | Accuracy
QNLI | Does sentence B contain the answer to the question in sentence A? | A) "What is essential for the mating of the elements that create radio waves?" B) "... to the electromagnetic field." = Answerable | Accuracy
RTE | Does sentence A entail sentence B? | A) "In 2003, Yunus brought the microcredit revolution to the streets of Bangladesh to support more than 50,000 beggars, whom the Grameen Bank respectfully calls Struggling Members." B) "Yunus supported more than 50,000 Struggling Members." = Entailed | Accuracy
WNLI | Sentence B replaces sentence A's ambiguous pronoun with one of the nouns - is this the correct noun? | A) "Lily spoke to Donna, breaking her concentration." B) "Lily spoke to Donna, breaking Lily's concentration." = Incorrect Referent | Accuracy

After a short period of time the models started to surpass the human benchmark, which led to the creation of SuperGLUE. SuperGLUE likewise consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number performance metric and an analysis toolkit. SuperGLUE surpassed GLUE because of more challenging tasks, more diverse task formats, comprehensive human baselines, improved code support and refined usage rules. The following figure gives a short summary of the SuperGLUE tasks.
FIGURE 2.31: Example instances for the SuperGLUE tasks (BoolQ, CB, COPA, MultiRC, ReCoRD, RTE, WiC, WSC); taken from https://mccormickml.com

The GLUE and SuperGLUE tasks are more or less reduced to classification problems. One might argue whether this is really general language understanding, but we will see other benchmarks which try to evaluate that in another way. However, it is also of interest to check whether the models understand what they are reading. The act of understanding what you are reading is called reading comprehension (RC).
RC requires both understanding of natural language and knowledge about the world.

2.3.3.1.2 Stanford Question Answering Dataset (SQuAD) (1.0 & 2.0)

Rajpurkar et al. (2016) introduced the Stanford Question Answering Dataset (SQuAD), a large reading comprehension dataset built from Wikipedia articles with human-annotated question-answer pairs. SQuAD contains 107,785 question-answer pairs on 536 articles and does not provide a list of answer choices for each question. The model must select the answer from all possible spans in the passage, and thus has to cope with a fairly large number of candidates. A problem is that the answer is guaranteed to exist in the context document. To address this weakness, Rajpurkar et al. (2018) presented SQuAD 2.0, the latest version of SQuAD, which combines the existing SQuAD data with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. The contribution of Rajpurkar et al. (2018) to NLP is not only that they provide a deeper glimpse into the workings of QA systems; they also facilitated the creation of non-English datasets.
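The span-selection setup, including the no-answer option of SQuAD 2.0, can be sketched as follows. The scoring recipe mirrors the common BERT-style approach (each candidate span is scored as the sum of a per-token start score and end score, and the best span is compared against a null score); the concrete scores and the threshold below are made up:

```python
def best_span(start_scores, end_scores, null_threshold=0.0, max_len=15):
    """Pick the best answer span from per-token start/end scores.

    Scores every valid (start, end) pair up to max_len tokens and, in
    SQuAD 2.0 style, predicts "no answer" when the best span does not
    beat the null score (position 0) by a tuned threshold.
    """
    null_score = start_scores[0] + end_scores[0]  # position 0 ~ "no answer"
    best = (null_score, None)
    for i in range(1, len(start_scores)):
        for j in range(i, min(i + max_len, len(end_scores))):
            score = start_scores[i] + end_scores[j]
            if score > best[0]:
                best = (score, (i, j))
    span_score, span = best
    if span is None or span_score < null_score + null_threshold:
        return None  # unanswerable
    return span

start = [0.1, 0.0, 2.0, 0.0]
end = [0.1, 0.0, 0.1, 1.5]
print(best_span(start, end))  # (2, 3)
```

This is exactly where the "fairly large number of candidates" comes from: a passage of n tokens has on the order of n * max_len candidate spans, plus the null option.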
Korean, Russian, Italian, Spanish, French and Arabic versions of SQuAD now exist. XQuAD, MLQA and TyDi are multilingual question-answering datasets; XQuAD, for instance, is a subset of SQuAD translated into ten different languages by professional translators. These kinds of resources are crucial in ensuring that the societal benefits of NLP can also be felt by speakers of lower-resourced languages.

2.3.3.1.3 Beyond the Imitation Game Benchmark (BIG-bench)

The benchmarks mentioned so far are rather old compared to the Beyond the Imitation Game Benchmark (BIG-bench) (Srivastava et al., 2022).
It is a collaborative benchmark intended to probe large language models and extrapolate their future capabilities, and it already contains more than 200 tasks. Its authors claim that current language-modeling benchmarks are insufficient to satisfy our need to understand the behavior of language models and to predict their future behavior, and they mainly provide three reasons for this. One of them is the short useful lifespan of benchmarks: when human-equivalent performance is reached, they are often discontinued or replaced, a dynamic one might call "challenge-solve-and-replace" evaluation. To prevent this, they encourage new task submissions; literally everybody can submit a task to BIG-bench, which is why they call it a living benchmark.
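Schematically, a submitted BIG-bench task is a JSON file of input/target examples plus some metadata, evaluated without fine-tuning against a chosen metric. The sketch below uses field names as they appear in commonly documented BIG-bench JSON tasks (description, keywords, metrics, examples); treat them as an assumption and check the official schema before submitting. The toy task and the stand-in "model" are invented:

```python
# A minimal BIG-bench-style JSON task: metadata plus input/target examples.
# Field names are assumed from the published JSON-task format; the task
# content itself is made up for illustration.
task = {
    "description": "Toy task: answer simple unit-conversion questions.",
    "keywords": ["arithmetic", "zero-shot"],
    "metrics": ["exact_str_match"],
    "examples": [
        {"input": "How many minutes are in two hours?", "target": "120"},
        {"input": "How many days are in a week?", "target": "7"},
    ],
}

def exact_str_match(task, model):
    """Zero-shot scoring: query the model once per example, no fine-tuning."""
    hits = sum(model(ex["input"]).strip() == ex["target"]
               for ex in task["examples"])
    return hits / len(task["examples"])

# A stand-in "model" that gets one of the two questions right.
dummy = lambda prompt: "120" if "minutes" in prompt else "8"
print(exact_str_match(task, dummy))  # 0.5
```

Because a task is just data plus a metric, even a handful of examples written by a single domain expert yields a scoreable task, which is what enables the "everybody can submit" strategy.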
The review of submitted tasks is based on ten criteria, including for example “Justification”: one has to give background motivating why this is an important capability of large language models to quantify. With the inclusion of small tasks they want to improve the diversity of topics covered and enable domain experts to contribute tasks without the difficulties of distributed human labeling. Another reason the existing benchmarks are insufficient is that they are narrowly targeted, and their targets are often ones that language models are already known to perform. This makes it impossible to identify new and unexpected capabilities that language models may develop with increased scale, or to characterize the breadth of current capabilities. Finally, many current benchmarks use data collected through human labeling that is not performed by experts or by the task authors.
The benchmark tasks are primarily intended to evaluate pre-trained models without task-specific fine-tuning. By focusing on such tasks in the zero- and few-shot evaluation setting, it becomes possible to provide meaningful scores even for tasks with a very small number of examples. The “everybody can submit” strategy also leads to the inclusion of a variety of tasks covering non-English languages. So far, large language models like GPT-3 and PaLM perform poorly on BIG-bench relative to expert humans, which is maybe a good sign for the future. On the other hand, superhuman performance on the SuperGLUE benchmark was achieved less than 18 months after it was produced.
2.3.3.1.4 WMT

The family of datasets most popularly used to benchmark machine translation systems comes from the Workshop on Machine Translation (WMT), the main event for machine translation and machine translation research. This conference is held annually. WMT includes competitions on different aspects of machine translation, known as shared tasks. Typically, the task organisers provide datasets and instructions, and teams then submit the output of their models. The submissions are ranked with human evaluation. Most of the models are evaluated on bilingual translation like English-to-German, but there are also tri-lingual tasks, like using English to improve Russian-to-Chinese machine translation.
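Besides human evaluation, WMT submissions are commonly scored with automatic metrics such as BLEU, which combines modified (clipped) n-gram precisions with a brevity penalty. A simplified single-reference, unsmoothed sketch (not the official WMT implementation) could look like this:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU with one reference and no smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = Counter(ngrams(candidate, n)), Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())   # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        if overlap == 0:
            return 0.0                         # unsmoothed: any zero precision kills the score
        precisions.append(overlap / total)
    # the brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat".split()
print(bleu(ref, ref))                     # identical sentences score 1.0
print(bleu("the cat sat on".split(), ref))  # perfect n-gram precision, but penalized for brevity
```

The second call illustrates why very high BLEU scores are rare in practice: even a fragment that matches the reference exactly is pulled below 1 by the brevity penalty.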
One of the most popular NLP metrics is the BLEU score, and this metric is also used in the WMT tasks. It is based on the idea that the closer the predicted sentence is to the human-generated target sentence, the better it is. BLEU scores lie between 0 and 1, but a score of 0.6 or 0.7 is considered the best one can achieve. Problematically, Bowman and Dahl (2021) claim that the evaluation for many natural language understanding (NLU) tasks is broken. They argue that unreliable and biased systems score so highly on standard benchmarks that there is little room for researchers who develop better systems to demonstrate their improvements. They provide four criteria to handle this:
1. Good performance on the benchmark should imply robust in-domain performance on the task.
2. Benchmark examples should be accurately and unambiguously annotated.
3. Benchmarks should offer adequate statistical power.
4. Benchmarks should reveal plausibly harmful social biases in systems, and should not incentivize the creation of biased systems.

Building new benchmarks that improve upon these four axes is likely to be quite difficult.

2.3.3.1.5 CheckList

Inspired by principles of behavioral testing in software engineering, Ribeiro et al. (2020) introduced CheckList, a model-agnostic and task-agnostic methodology for testing NLP models.
CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideas, as well as a software tool to generate a large and diverse number of test cases quickly. To break down potential capability failures into specific behaviors, CheckList introduces three different test types. A Minimum Functionality test (MFT), inspired by unit tests in software engineering, is a collection of simple examples that check a behavior within a capability. In an Invariance test (INV), label-preserving perturbations are applied to the inputs and the model predictions are expected to remain the same. A Directional Expectation test (DIR) is similar, except that the label is expected to change in a certain way. Tests created with CheckList can be applied to any model, making them easy to incorporate into current benchmarks or evaluation pipelines, and CheckList is open source. The goal was to create a benchmark that goes beyond just accuracy on held-out data.
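As an illustration, an Invariance test might apply a label-preserving perturbation to each input and collect the inputs whose prediction changes. Both the perturbation and the toy model below are made up for this sketch and are not part of the CheckList tool:

```python
def swap_last_chars(text):
    """Hypothetical label-preserving perturbation: swap the last two characters (a typo)."""
    return text[:-2] + text[-1] + text[-2] if len(text) > 1 else text

def invariance_test(model, inputs, perturb):
    """INV: predictions should not change under a label-preserving perturbation.
    Returns the inputs on which the model's behavior changed (test failures)."""
    return [x for x in inputs if model(x) != model(perturb(x))]

def toy_sentiment_model(text):
    # stand-in "model": a brittle keyword classifier, for illustration only
    return "positive" if "good" in text else "negative"

failures = invariance_test(toy_sentiment_model,
                           ["this is good", "this is bad"],
                           swap_last_chars)
print(failures)  # the typo flips the prediction for "this is good"
```

The brittle keyword model fails the test on the first input, which is exactly the kind of behavioral failure held-out accuracy alone would not surface.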
2.3.3.2 Computer Vision Benchmarks

CV models try to answer visual tasks, i.e. tasks that can be solved only with visual input. Often a visual task can be solved as a classification problem, which is called image classification, but there are also numerous other applications for CV. This chapter focuses on image classification, semantic segmentation and object detection, together with their usual benchmark datasets.
2.3.3.2.1 ImageNet Versions

It is common not only to pre-train models on ImageNet datasets but also to benchmark models on them. There are many different variants of ImageNet. There is ImageNet-R, a version with non-natural images such as art, cartoons and sketches, or ImageNet-A, a more challenging version because it uses adversarial images (Goodfellow et al., 2014d), or ImageNet-V2 (Recht et al., 2019). The last was created to check whether there is over-fitting on the classic pre-training ImageNet dataset. The authors followed the creation process of the original dataset and tested to what extent current classification models generalize to new data.
Recht et al. (2019) found accuracy drops for all models and suggested that these drops are not caused by adaptivity, but by the models’ inability to generalize to slightly “harder” images than those found in the original test sets. The goal of image classification is to classify an image by assigning a label. Typically, image classification refers to images in which only one object appears. To assess the performance one mainly uses Top-1 accuracy, where the model’s answer with the highest probability must be exactly the expected answer, or Top-5 accuracy, which means that any of the five highest-probability answers must match the expected answer. Beyer et al. (2020) tried to answer the question “Are we done with ImageNet?” in their paper.
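The Top-1 and Top-5 metrics described above reduce to a simple membership check; a minimal sketch over toy probability rows (no particular model assumed):

```python
def top_k_accuracy(probs, labels, k=1):
    """Fraction of samples whose true label is among the k most probable classes."""
    hits = 0
    for row, label in zip(probs, labels):
        # class indices sorted by descending probability, truncated to k
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

probs = [[0.1, 0.6, 0.2, 0.1],    # predicted class 1
         [0.4, 0.3, 0.2, 0.1],    # predicted class 0, runner-up 1
         [0.05, 0.05, 0.1, 0.8]]  # predicted class 3
labels = [1, 1, 3]
print(top_k_accuracy(probs, labels, k=1))  # 2 of 3 top-1 predictions are correct
print(top_k_accuracy(probs, labels, k=2))  # all true labels appear in the top 2
```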
Many images of the ImageNet dataset contain a clear view of a single object of interest: for these, a single label is an appropriate description of their content. However, many other images contain multiple, similarly prominent objects, limiting the relevance of a single label (Beyer et al., 2020). In these cases, the ImageNet label is just one of many equally valid descriptions of the image, and as a result an image classifier can be penalized for producing a correct description that happens not to coincide with that chosen by the ImageNet label. In short, a single label per image is not sufficient in many cases. Their answer to the question “Are we done with ImageNet?” was both yes and no. The shortcomings of ImageNet labels and their accuracy were identified, and they provided a new ImageNet validation set, ReaL (Beyer et al.
, 2020), “Reassessed Labels”, together with a new metric called ReaL accuracy (Beyer et al., 2020). The ReaL accuracy measures the precision of the model’s top-1 prediction, which is deemed correct if it is included in the set of ReaL labels. These findings suggested that although the original set of labels may be nearing the end of its useful life, ImageNet and its ReaL labels can readily benchmark progress in visual recognition for the foreseeable future. Adding a localization task to the classification task results in object detection. It is used to analyze more realistic cases, like those mentioned above, in which multiple objects may or may not exist in an image. The location of an object is typically represented by a bounding box.
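Returning briefly to the ReaL metric: a top-1 prediction counts as correct if it falls inside the image’s set of reassessed labels. A toy sketch with hypothetical data (not the official evaluation code):

```python
def real_accuracy(top1_preds, label_sets):
    """ReaL-style accuracy: a prediction is correct if it is in the image's label set."""
    correct = [pred in labels for pred, labels in zip(top1_preds, label_sets)]
    return sum(correct) / len(correct)

preds = ["dog", "cat", "car"]
label_sets = [{"dog", "puppy"}, {"tiger"}, {"car", "taxi"}]  # several valid labels per image
print(real_accuracy(preds, label_sets))  # 2 of 3 predictions fall in their label set
```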
2.3.3.2.2 MS-COCO & Object365

In recent years, the Microsoft COCO dataset and the Object365 dataset have become the standards to evaluate object detection algorithms, but it is also possible to use an ImageNet dataset. The primary challenge metric is called mean Average Precision (mAP) at Intersection over Union (IoU) = .50:.05:.95. The IoU is the intersection of the predicted and ground-truth boxes divided by the union of the predicted and ground-truth boxes. IoU, also called the Jaccard index, ranges from 0 to 1, where 0 means no overlap and 1 means perfect overlap.
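For axis-aligned bounding boxes, the IoU can be computed directly from corner coordinates; this sketch assumes the common (x1, y1, x2, y2) convention:

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if the boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # intersection 1, union 7 -> 1/7
```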
But how is precision captured in the context of object detection? Precision is the ratio True Positive / (True Positive + False Positive). With the help of an IoU threshold, it is possible to decide whether a prediction is a True Positive (TP), a False Positive (FP), or a False Negative (FN). The example below shows predictions with the IoU threshold α set to 0.5. The notation .50:.05:.95 means that one uses the 10 IoU thresholds {0.50, 0.55, 0.60, ..., 0.95}. COCO uses this as its primary metric because it rewards detectors with better localization (Microsoft, 2019). Object detection and image segmentation are both tasks concerned with localizing objects of interest in an image, but in contrast to object detection, image segmentation focuses on pixel-level grouping of different semantics. Image segmentation can be split into various tasks including instance segmentation, panoptic segmentation, and semantic segmentation. Instance segmentation is a task that requires the identification and segmentation of individual instances in an image.
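The thresholding step can be sketched as follows, labeling each detection by its best IoU with any ground-truth box. This is a deliberately simplified view: the real COCO matching also handles duplicate detections, categories and crowd regions:

```python
def classify_predictions(best_ious, threshold=0.5):
    """Label each detection TP or FP by comparing its best IoU against the threshold."""
    return ["TP" if iou >= threshold else "FP" for iou in best_ious]

# best-IoU values for three predicted boxes, as in the example with α = 0.5
print(classify_predictions([0.96, 0.22, 0.0]))  # ['TP', 'FP', 'FP']
```

Under mAP@.50:.05:.95 this classification is repeated at each of the 10 thresholds, so a box with IoU 0.6 counts as TP at α = 0.5 but as FP at α = 0.7 and above.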
Semantic segmentation is a task that requires segmenting all the pixels in the image based on their class label. Panoptic segmentation is a combination of semantic and instance segmentation: the task is to classify all the pixels belonging to a class label, but also to identify which instance of the class they belong to. Panoptic and instance segmentation are often done on COCO.

2.3.3.2.3 ADE20K

Semantic segmentation can be done on ADE20K (Zhou et al., 2017).
ADE are the first three letters of the name Adela Barriuso, who single-handedly annotated the entire dataset, and 20K refers to the roughly 20,000 images in the dataset. This dataset shows a high annotation complexity, because any image in ADE20K contains at least five objects, and the maximum number of object instances per image reaches 273. To assess the performance of a model on the ADE20K dataset one uses the mean IoU. It indicates the IoU between the predicted and ground-truth pixels, averaged over all the classes. In contrast to the object detection task, the definition of TP, FP, and FN is slightly different, as it is not based on a predefined threshold.

FIGURE: three example predictions at α = 0.5, with IoU = 0.96 (True Positive), IoU = 0.22 (False Positive) and IoU = 0.00 (False Negative).

TP is now the area of intersection between the ground truth and the segmentation mask, FP is the predicted area outside the ground truth, and FN is the number of pixels in the ground-truth area that the model failed to predict. The calculation of IoU is the same as in object detection tasks: the intersection of the predicted and ground-truth regions, i.e. TP, divided by their union, which is essentially TP + FP + FN. An example is shown in the figure below.
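Pixel-level mean IoU follows directly from these definitions; a minimal sketch over flat label lists (real implementations operate on label maps and handle ignore-labels):

```python
def mean_iou(pred, target, num_classes):
    """Mean IoU over classes for flat lists of per-pixel class labels."""
    ious = []
    for c in range(num_classes):
        tp = sum(p == c and t == c for p, t in zip(pred, target))
        fp = sum(p == c and t != c for p, t in zip(pred, target))
        fn = sum(p != c and t == c for p, t in zip(pred, target))
        union = tp + fp + fn
        if union > 0:                  # skip classes absent from both pred and target
            ious.append(tp / union)
    return sum(ious) / len(ious)

pred   = [0, 0, 1, 1, 1, 2]  # predicted class per pixel
target = [0, 1, 1, 1, 2, 2]  # ground-truth class per pixel
print(mean_iou(pred, target, num_classes=3))  # each class scores 0.5 -> mean 0.5
```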
FIGURE 2.32: taken from https://learnopencv.com (true-positive, false-positive and false-negative pixels at different IoU values)

2.3.3.3 Multi-Modal Benchmarks

Visual understanding goes well beyond object recognition or semantic segmentation. With one glance at an image, a human can effortlessly imagine the world beyond the pixels, as emphasized by the quote "a picture says more than a thousand words". High-order cognition and commonsense reasoning about the world are required to infer people's actions, goals, and mental states. To answer visual understanding tasks, a model needs to leverage more than one modality.
2.3.3.3.1 Visual Commonsense Reasoning (VCR)

Visual understanding tasks require seamless integration between recognition and cognition, and this task can be formalized as Visual Commonsense Reasoning (VCR). Zellers et al. (2019) introduce a dataset of the same name. It consists of 290k multiple-choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and high-quality problems at scale is Adversarial Matching: incorrect choices are obtained via maximum-weight bipartite matching between queries and responses. This matching transforms rich annotations into multiple-choice questions with minimal bias.
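The bipartite-matching idea can be illustrated with a toy sketch (the relevance scores below are invented; the actual method of Zellers et al. also trades relevance off against similarity to the correct answer, which we omit here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy relevance scores: rows are questions, columns are candidate responses
# taken from other questions; higher means a more plausible-sounding foil.
scores = np.array([[0.9, 0.1, 0.4],
                   [0.2, 0.8, 0.3],
                   [0.5, 0.2, 0.7]])
np.fill_diagonal(scores, -1e9)  # a question must not be paired with its own answer
rows, cols = linear_sum_assignment(scores, maximize=True)
# cols[i] is the foil response assigned to question i
```

The maximum-weight matching picks, for every question, a hard but wrong response exactly once, so no response is reused.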
VCR is cast as a four-way multiple-choice task. The underlying scenes come from the Large Scale Movie Description Challenge and YouTube movie clips. The authors searched for interesting and diverse situations; to ensure this, they trained and applied an "interestingness filter". The most interesting images were passed to workers on Amazon Mechanical Turk, together with additional context in the form of video captions. After reading this, the workers had to propose one to three questions about the image. For each question, they had to provide a reasonable answer and a rationale. This results in an underlying dataset with high agreement and diversity of reasoning.
Almost every answer and rationale is unique. To make these cognition-level questions simple to ask, and to avoid the clunkiness of referring expressions, VCR's language integrates object tags ([person2]) and explicitly excludes referring expressions ('the woman on the right'). These object tags are detected with Mask R-CNN. The benchmark contains the following types of questions: 38% Explanation ('Why is [person11] wearing sunglasses inside?'), 24% Activity ('What are [person1] and [person2] doing?'), 13% Temporal ('What will [person6] do after unpacking the groceries?'), 8% Mental, 7% Role, 5% Scene, and 5% Hypothetical. In this setup, a model is provided a question and has to pick the best answer out of four choices. Only one of the four is correct.
If the model answered correctly, a new question, along with the correct answer, is provided, and the model now has to justify the answer by picking the best rationale out of four choices. The first part is called Question Answering (Q → A) and the second part Answer Justification (QA → R). Both parts are combined into a Q → AR metric, in which a model only gets a question right if it answers correctly and picks the right rationale. If it gets either the answer or the rationale wrong, the entire prediction counts as wrong. Models are evaluated in terms of accuracy. At release, humans found VCR easy (over 90% accuracy), while state-of-the-art vision models struggled (around 45%). At the moment of writing, the best model achieves 85.5 in Q → A, 87.5 in QA → R and 74.9 in Q → AR. So the models are closing the gap, but VCR is still far from solved. A "simpler" approach to evaluating vision-language models is to ask questions about an image that do not require commonsense reasoning.

2.3.3.3.2 Visual Question Answering 1.0 & 2.0 (VQA)

For this reason, Antol et al. (2015) created an open-ended answering task and a multiple-choice task.
Their dataset contains roughly 250k images, 760k questions, and 10M answers. 204k images are taken from the MS COCO dataset, but newly created datasets are used as well. Three questions were collected for each image or scene, and each question was answered by ten subjects along with their confidence. "What"-, "how"- and "is"-questions are the ones mainly used in the benchmark. However, the dataset had major flaws in its creation: a model blindly answering "yes" without reading the rest of the question or looking at the associated image achieves a VQA accuracy of 87%, the most common sport answer "tennis" was the correct answer for 41% of the questions starting with "What sport is", and "2" is the correct answer for 39% of the questions starting with "How many" (Antol et al., 2015).
Zhang et al. (2016b) pointed out a particular 'visual priming bias' in the VQA dataset: language provides a strong prior that can result in good superficial performance without the underlying models truly understanding the visual content. They therefore collected a balanced dataset containing pairs of complementary scenes to reduce or eliminate this strong language prior. Goyal et al. (2017) did the same and created a second iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0).
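Since every question comes with ten human answers, VQA scores a prediction by consensus rather than exact match. A common simplified form of this metric (a sketch; the official evaluation additionally averages over annotator subsets and normalizes answer strings) is:

```python
def vqa_accuracy(predicted, human_answers):
    """An answer is fully correct if at least 3 of the 10 annotators gave it."""
    matches = sum(a == predicted for a in human_answers)
    return min(matches / 3.0, 1.0)

vqa_accuracy("tennis", ["tennis"] * 4 + ["badminton"] * 6)            # -> 1.0
vqa_accuracy("2", ["2", "2", "3", "3", "3", "3", "4", "4", "4", "4"])  # -> 2/3
```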
Goyal et al. (2017) balanced the popular VQA dataset (Antol et al., 2015) by collecting complementary images such that every question in the balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. The dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs.

2.3.3.4 GQA

Hudson and Manning (2019) introduced the GQA dataset for real-world visual reasoning and compositional question answering.
It consists of 113K images and 22M questions of assorted types and varying compositionality degrees, measuring performance on an array of reasoning skills such as object and attribute recognition, transitive relation tracking, spatial reasoning, logical inference and comparisons. The authors also proposed Consistency, Validity and Plausibility as new measures to get more insight into models' behavior and performance. Consistency measures how consistent the responses are across different questions; achieving high consistency may require a deeper understanding of the question semantics in the context of the image. The validity metric checks whether a given answer is in the scope of the question, e.g. responding with some color to a color question. The plausibility score goes a step further, measuring whether the answer is reasonable, or makes sense, given the question (e.g. elephants usually do not eat pizza). The authors also compared GQA with VQA 2.0 and came to the conclusion that the questions of GQA are objective, unambiguous, more compositional and can be answered from the images only, potentially making this benchmark more controlled and convenient for making research progress on. Conversely, VQA questions tend to be a bit more ambiguous and subjective, at times with no clear and conclusive answer. Finally, GQA provides more questions for each image and thus covers it more thoroughly than VQA.
2.3.3.4.1 Generative Benchmarks

Almost everybody is talking right now about generative models like DALL-E 2, Imagen and Parti; it seems like every month a new one is presented. But how can we compare these models? Automatic image quality and automatic image-text alignment are two reasonable evaluation metrics. The Fréchet Inception Distance (FID) can be used as the primary automated metric for measuring image quality. It compares the distribution of generated images with the distribution of the real images that were used to train the generator. A small value is desirable, as FID is a distance measure.
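Concretely, FID fits a Gaussian to the Inception features of each image set and computes the Fréchet distance between the two Gaussians. Given the feature means and covariances, this can be sketched as (variable names are ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^(1/2))."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)   # matrix square root
    if np.iscomplexobj(covmean):       # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

mu = np.zeros(2)
fid(mu, np.eye(2), mu, np.eye(2))                     # identical Gaussians -> 0.0
fid(mu, np.eye(2), np.array([1.0, 0.0]), np.eye(2))   # shifted mean -> 1.0
```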
Text-image fit can be captured through automated captioning evaluation. For this, an image output by the model is captioned by a model capable of image captioning. The similarity of the input prompt and the generated caption is then assessed via BLEU, CIDEr, METEOR and SPICE, and human evaluation is done as well: different generative models are used with the same prompts, and the human is asked to choose which output is a higher-quality image and which is a better match to the input prompt. One always has to keep in mind that the published images of generative models are usually "cherry-picked". They do not typically represent, for example, a single-shot interaction in which the model directly produces such an image. To make this clear, Yu et al. (2022a) showed their way of growing the cherry tree.
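As a toy illustration of such a prompt-caption comparison, a clipped unigram precision (the first ingredient of BLEU, here without the brevity penalty and higher-order n-grams) could look like:

```python
from collections import Counter

def unigram_precision(prompt, caption):
    """Fraction of caption words that also occur in the prompt (with clipping)."""
    ref = Counter(prompt.lower().split())
    hyp = Counter(caption.lower().split())
    clipped = sum(min(count, ref[word]) for word, count in hyp.items())
    return clipped / max(sum(hyp.values()), 1)

unigram_precision("a van parked on grass", "a van on grass")    # -> 1.0
unigram_precision("a van parked on grass", "a red sports car")  # -> 0.25
```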
FIGURE 2.33: taken from the Parti paper (successive refinements of the prompts for a smiling sloth and a shiny VW van parked on grass)

2.3.3.4.2 PartiPrompts, DrawBench, Localized Narratives

In a sense, this is a form of model whispering, as one stretches such models to their limits. Besides that, Yu et al. (2022a) also present PartiPrompts (P2), a set of over 1600 (English) prompts curated to measure model capabilities across a variety of categories and controlled dimensions of difficulty. P2 prompts can be simple, but they can also be complex, such as the 67-word description they created for Vincent van Gogh's The Starry Night. DrawBench is a similar dataset. The Localized Narratives dataset from the dataset section also consists of long prompts, and thus it can be used as a benchmark for generative models as well.
Current benchmarks give a good perspective on model performance on a wide range of V&L tasks, but the field is only starting to assess why models perform so well and whether models learn specific capabilities that span multiple V&L tasks.

2.3.3.4.3 FOIL it!

Shekhar et al. (2017) proposed an automatic method for creating a large dataset of real images with minimal language bias and some diagnostic abilities. They extended the MS COCO dataset and created FOIL-COCO. FOIL stands for "Find One mismatch between Image and Language caption"; the dataset consists of images associated with incorrect captions.
The captions are produced by introducing one single error (or “foil”) per caption in existing, human-annotated data. Each datapoint in FOIL-COCO can thus be described as a triplet consisting of an image, the original caption and the foil caption. The data generation process consists of four main steps:

1. Generation of replacement word pairs
2. Splitting of replacement pairs into training and testing
3. Generation of foil captions
4. Mining the hardest foil caption for each image

The models are evaluated on three different tasks. The first one is Correct vs. foil classification.
Given an image and a caption, the model is asked to mark whether the caption is correct or wrong. The aim is to understand whether LaVi models can spot mismatches between their coarse representations of language and visual input. The second task is Foil word detection: given an image and a foil caption, the model has to detect the foil word. The aim is to evaluate the understanding of the system at the word level. The last task is Foil word correction: given an image, a foil caption and the foil word, the model has to detect the foil and provide its correction. The aim is to check whether the system’s visual representation is fine-grained enough to extract the information necessary to correct the error.
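All three tasks can be pictured as reductions to a word-level image-text compatibility score. The sketch below illustrates this with a hypothetical lookup table (`SCORES`) standing in for a real LaVi model’s alignment score; all names and values are illustrative, not part of FOIL-COCO.

```python
# Toy word-level image-text scorer backing the three FOIL tasks.
# `SCORES` is a hypothetical stand-in for a real model's alignment score.
SCORES = {("img_dog", "dog"): 0.9, ("img_dog", "cat"): 0.1,
          ("img_dog", "bicycle"): 0.8}

def score(image, word):
    # Unknown words (articles, verbs, ...) get a neutral default score.
    return SCORES.get((image, word), 0.5)

def classify(image, caption, threshold=0.3):
    """Task 1: the caption counts as correct if no word is clearly
    incompatible with the image."""
    return all(score(image, w) >= threshold for w in caption.split())

def detect_foil(image, foil_caption):
    """Task 2: the foil word is the least image-compatible word."""
    return min(foil_caption.split(), key=lambda w: score(image, w))

def correct_foil(image, foil_caption, foil_word, vocab):
    """Task 3: replace the foil with the most image-compatible candidate."""
    return max(vocab, key=lambda w: score(image, w))
```

A real system would of course derive the score from learned joint representations rather than a table, but the interface of the three tasks stays the same.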
Their hypothesis is that systems which, like humans, deeply integrate the language and vision modalities should spot foil captions quite easily.

2.3.3.4.4 VALSE

Vision And Language Structured Evaluation (VALSE) (Parcalabescu et al., 2022) builds on the same idea. This benchmark aims to gauge the sensitivity of pre-trained V&L models to foiled instances. It covers a wide spectrum of basic linguistic phenomena affecting the linguistic and visual modalities: existence, plurality, counting, spatial relations, actions, and entity coreference.
To generate the foils, they first use strong language models to propose foils, and second they use natural language inference (NLI) to filter out foils that could still describe the image. To do this automatically, they use the image as a premise and the caption as its entailed hypothesis. Additionally, they use the caption as a premise and the foil as the hypothesis. If an NLI model predicts the foil to be neutral or a contradiction with respect to the caption, they see this as an indicator of a good foil. Finally, they used human annotators to validate all generated testing data. Mainly the MS-COCO dataset is used. VALSE is a task-independent, zero-shot benchmark to assess the extent to which models learn to ground specific linguistic phenomena as a consequence of their pretraining.
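The caption-as-premise filtering step can be sketched in a few lines. `nli_predict` is a stub for a real NLI model mapping (premise, hypothesis) to a label; the lookup table of canned predictions and the example sentences are hypothetical.

```python
# Sketch of VALSE-style NLI filtering of candidate foils.
# NLI_STUB stands in for a trained NLI model's predictions.
NLI_STUB = {
    ("two dogs on a sofa", "two cats on a sofa"): "contradiction",
    ("two dogs on a sofa", "two animals on a sofa"): "entailment",
}

def nli_predict(premise: str, hypothesis: str) -> str:
    return NLI_STUB.get((premise, hypothesis), "neutral")

def is_good_foil(caption: str, foil: str) -> bool:
    """Keep a foil only if, with the caption as premise, it is NOT
    entailed, i.e. it no longer describes the same image."""
    return nli_predict(caption, foil) in {"neutral", "contradiction"}
```

Note that "two animals on a sofa" is rejected: although it differs from the caption, it still truthfully describes the image, so it would make a poor foil.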
2.3.3.5 Other Benchmarks

As we don’t live in a world with unlimited resources, it is also important to keep track of how much energy is consumed to train models and how big their carbon footprint is. Strubell et al. (2019b) investigated several NLP models and benchmarked model training and development costs in terms of dollars and estimated CO2 emissions. They found that training a single BERT base model without hyperparameter tuning on GPUs requires the same energy as a trans-American flight. On average, a human is responsible for 5t of CO2 per year, and Strubell et al. (2019b) estimated that the training procedure of a big Transformer with neural architecture search emitted 284t of CO2.
Works such as Lottick et al. (2019) and Henderson et al. (2020) have released online tools to benchmark energy usage, and initiatives such as the SustainNLP workshop have since taken up the goal of prioritizing computationally efficient hardware and algorithms. These findings are just some points one should keep in mind. In the following chapters we will see how the multimodal architectures use these datasets and how they perform on the given benchmarks.

3 Multimodal architectures

Authors: Luyang Chu, Karol Urbanczyk, Giacomo Loss, Max Schneider, Steffen Jauch-Walser

Supervisor: Christian Heumann

Multimodal learning refers to the process of learning representations from different types of input modalities, such as image data, text or speech. Due to methodological breakthroughs in the fields of Natural Language Processing (NLP) as well as Computer Vision (CV) in recent years, multimodal models have gained increasing attention, as they are able to strengthen predictions and better emulate the way humans learn.
This chapter focuses on images and text as input data. The remainder of the chapter is structured as follows: The first part, “Image2Text”, discusses how transformer-based architectures improve meaningful captioning for complex images using the large-scale, richly annotated COCO dataset (Lin et al., 2014c; Cornia et al., 2020). While looking at a photograph and describing it, or parsing a complex scene and describing its context, is not a difficult task for humans, it appears to be much more complex and challenging for computers. We start by focusing on images as input modalities. In 2014, Microsoft COCO was developed with the primary goal of advancing the state-of-the-art (SOTA) in object recognition by diving deeper into the broader question of scene understanding
(Lin et al., 2014c). “COCO” is the acronym for Common Objects in Context. It addresses three core problems in scene understanding: object detection (non-iconic views), segmentation, and captioning. While transformer-based architectures are already widely used in NLP for tasks like machine translation and language understanding, their potential in the multimodal context has not been fully explored yet. With the help of the MS COCO dataset, the transformer-based architecture “Meshed-Memory Transformer for Image Captioning” (M²) will be introduced, which improves both the image encoding and the language generation steps (Cornia et al., 2020). The performance of M² and other fully-attentive models will be compared on the MS COCO dataset. Next, in “Text2Image”, the idea of incorporating textual input in order to generate visual representations is described.
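The “memory” half of M²’s attention operator, mentioned above, can be pictured as ordinary scaled dot-product attention whose keys and values are extended with learned slots that can encode a priori knowledge. The NumPy sketch below uses random placeholder weights and illustrative shapes; it is a caricature of the idea, not the trained model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_attention(q, k, v, mem_k, mem_v):
    """Scaled dot-product attention over keys/values extended with
    learned memory slots (the 'memory' in Meshed-Memory attention)."""
    k_ext = np.concatenate([k, mem_k], axis=0)   # (n + m, d)
    v_ext = np.concatenate([v, mem_v], axis=0)   # (n + m, d)
    d = q.shape[-1]
    attn = softmax(q @ k_ext.T / np.sqrt(d))     # (n_q, n + m)
    return attn @ v_ext

rng = np.random.default_rng(0)
d, n, m = 8, 4, 2                                # dim, tokens, memory slots
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
mem_k, mem_v = rng.normal(size=(m, d)), rng.normal(size=(m, d))
out = memory_attention(q, k, v, mem_k, mem_v)    # shape (4, 8)
```

The “meshed” half of the architecture, in which the decoder attends to all encoder layers, is omitted here for brevity.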
Current advancements in this field have been made possible largely due to recent breakthroughs in NLP, which first allowed for learning contextual representations of text. Transformer-like architectures are used to encode the input into embedding vectors, which later help to guide the process of image generation. The chapter discusses the development of the field in chronological order, looking into the details of the most recent milestones. Concepts such as generative adversarial networks (GANs), variational auto-encoders (VAEs), VAEs with vector quantization (VQ-VAE), diffusion, and autoregressive models are covered to provide the reader with a better understanding of the roots of the current research and where it might be heading. Some of the most outstanding outputs generated by state-of-the-art works are also presented in the chapter. The third part, “Images supporting Language Models”, deals with the integration of visual elements into purely textual language models.
Distributional semantic models such as Word2Vec and BERT assume that the meaning of a given word or sentence can be understood by looking at how (in which context) and when the word or sentence appears in the text corpus, namely from its “distribution” within the text. But this assumption has been historically questioned, because words and sentences must be grounded in other perceptual dimensions in order to understand their meaning (see for example the “symbol grounding problem”; Harnad, 1990). For these reasons, a broad range of models has been developed with the aim of improving pure language models by leveraging the addition of other perceptual dimensions, such as the visual one. This subchapter focuses in particular on the integration of visual elements (here: images) to support pure language models for various tasks at the word-/token-level as well as at the sentence level. The starting point is always a language model, into which visual representations (often extracted with the help of large pools of images from datasets like MS COCO; see the chapter “Img2Text” for further references) are to be “integrated”.
But how? A wide range of solutions has been proposed: on one side of the spectrum, textual and visual elements are learned separately and then “combined” afterwards, whereas on the other side, the learning of textual and visual features takes place simultaneously/jointly. For example, Silberer and Lapata (2014) implement a model in which a one-to-one correspondence between textual and visual space is assumed. Text and visual representations are passed to two separate unimodal encoders and both outputs are then fed to a bimodal autoencoder. On the other side, Bordes et al. (2020) propose a “text objective function” whose parameters are shared with an additional “grounded objective function”. The training of the latter takes place in what the authors call a “grounded space”, which allows them to avoid the one-to-one correspondence between textual and visual space.
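A minimal sketch of the “combine afterwards” end of the spectrum, loosely in the spirit of the stacked-autoencoder setup: each modality gets its own encoder, and the concatenated codes pass through a shared bimodal layer. All weights, dimensions and activations here are random, untrained placeholders chosen only to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(1)

def unimodal_encode(x, W):
    # One linear layer + nonlinearity stands in for a full encoder.
    return np.tanh(x @ W)

def bimodal_encode(t_emb, v_emb, W_b):
    """Late fusion: encode each modality separately, then feed the
    concatenation to a shared bimodal layer."""
    return np.tanh(np.concatenate([t_emb, v_emb], axis=-1) @ W_b)

text, image = rng.normal(size=10), rng.normal(size=20)    # toy features
W_t, W_v = rng.normal(size=(10, 8)), rng.normal(size=(20, 8))
W_b = rng.normal(size=(16, 8))
joint = bimodal_encode(unimodal_encode(text, W_t),
                       unimodal_encode(image, W_v), W_b)  # shape (8,)
```

In a joint-learning setup, by contrast, both modalities would influence each other's representations already during encoding rather than only in the final shared layer.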
These are just introductory examples, and between these two approaches there are many shades of gray (probably even more than fifty ...). These models in many instances exhibit better performance than pure language models, but they still struggle in some respects, for example when dealing with abstract words and sentences.

FIGURE 3.1: Left: Silberer and Lapata (2014) stack autoencoders to learn higher-level embeddings from textual and visual modalities, encoded as vectors of attributes. Right: Bordes et al. (2020) fuse textual and visual information in an intermediate space denoted as “grounded space”; the “grounding objective function” is not applied directly on sentence embeddings but trained on this intermediate space, onto which sentence embeddings are projected.
Afterwards, in the subchapter on “Text supporting Image Models”, approaches where natural language is used as additional supervision for CV models are described. Intuitively, these models should be more powerful than models supervised solely by manually labeled data, simply because there is much more signal available in the training data. One prominent example is the CLIP model (Radford et al., 2021a) with its new dataset WIT (WebImageText), comprising 400 million text-image pairs scraped from the internet. Similar to “Text2Image”, the recent success stories in NLP have inspired most of the new approaches in this field, most importantly pre-training methods that learn directly from raw text (e.g. GPT-n, Generative Pre-trained Transformer;
Brown et al., 2020). Accordingly, the acronym CLIP stands for Contrastive Language-Image Pre-training. A transformer-like architecture is used for jointly pre-training a text encoder and an image encoder. For this, the contrastive goal of correctly predicting which natural language text pertains to which image inside a certain batch is employed. Training this way turned out to be more efficient than generating captions for images. This leads to a flexible model, which at test time uses the learned text encoder as a “zero-shot” classifier on embeddings of the target dataset’s classes. The model can, for example, perform optical character recognition, geo-location detection and action recognition. Performance-wise, CLIP can be competitive with task-specific supervised models, while never having seen an instance of the specific dataset before.
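The batch-level contrastive objective described above can be sketched as a symmetric cross-entropy over a matrix of scaled cosine similarities, with matching image-caption pairs on the diagonal. This is a minimal NumPy version for illustration, not CLIP’s actual implementation (which, among other things, learns the temperature).

```python
import numpy as np

def logsumexp(x, axis):
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss: within a batch, each image must 'pick'
    its own caption among all captions, and each caption its own image."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # (batch, batch)
    n = logits.shape[0]
    # Cross-entropy with the diagonal (matching pairs) as targets:
    log_p_img = logits - logsumexp(logits, axis=1)  # image -> caption
    log_p_txt = logits - logsumexp(logits, axis=0)  # caption -> image
    return -(np.trace(log_p_img) + np.trace(log_p_txt)) / (2 * n)

# Perfectly matched toy embeddings give a loss close to 0.
loss_matched = clip_contrastive_loss(np.eye(4), np.eye(4))
```

Shuffling the captions relative to the images drives the loss up, which is exactly the signal the two encoders are trained to minimize.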
This suggests an important step towards closing the “robustness gap”, where machine learning models fail to meet the expectations set by their previous performance, especially on ImageNet test sets, when evaluated on new datasets. Finally, the subchapter “Models for both modalities” discusses how text and image inputs can be incorporated into a single unifying framework in order to get closer to a general self-supervised learning framework. There are two key advantages that make such an architecture particularly interesting. First, similar to the models mentioned in previous parts, being devoid of human labelling, self-supervised models don’t suffer from the same capacity constraints as regular supervised learning models. On top of that, while there have been notable advances in dealing with different modalities using single-modality models, it is often unclear to what extent a model structure generalizes across different modalities.
Rather than potentially learning modality-specific biases, a general multipurpose framework can help increase robustness while also simplifying the learner portfolio. In order to investigate different challenges and trends in vision-and-language modelling, this section takes a closer look at three different models, namely data2vec (Baevski et al., 2022), VilBert (Lu et al., 2019b) and Flamingo (Alayrac et al., 2022). Data2vec is a multimodal self-supervised learning model which uses a single framework to process either speech, natural language or visual information. This is in contrast to earlier models which used different algorithms for different modalities. The core idea of data2vec, developed by Meta AI, is to predict latent representations of the full input data based on a masked view of the input, in a self-distillation setup using a standard transformer architecture
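This self-distillation loop can be caricatured in a few lines: a teacher (an exponential-moving-average copy of the student) encodes the full input to produce latent targets, and the student regresses those targets for the masked positions from a masked view. Linear maps stand in for the transformer here; everything below is an illustrative toy, not the actual data2vec code.

```python
import numpy as np

def ema_update(teacher_W, student_W, tau=0.999):
    """Teacher weights slowly track the student via an EMA."""
    return tau * teacher_W + (1 - tau) * student_W

def data2vec_step(x, mask, student_W, teacher_W):
    """One simplified data2vec step: the student sees a masked view and
    regresses the teacher's latents at the masked positions."""
    targets = x @ teacher_W                     # teacher encodes full input
    x_masked = np.where(mask[:, None], 0.0, x)  # zero out masked tokens
    preds = x_masked @ student_W                # student encodes masked view
    return ((preds[mask] - targets[mask]) ** 2).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))                     # 6 tokens, dim 4
mask = np.array([True, False, True, False, False, True])
W_s = rng.normal(size=(4, 4))
W_t = W_s.copy()
loss = data2vec_step(x, mask, W_s, W_t)
W_t = ema_update(W_t, W_s)
```

Because the targets are continuous latents rather than modality-specific tokens or pixels, the same loop applies unchanged to speech, text and images.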
(Baevski et al., 2022). As a result, the main improvement lies in the framework itself, not in the underlying architectures. For example, the transformer architecture being used follows Vaswani et al. (2017b). Through their parallelizability, transformers have several advantages over RNNs/CNNs, particularly when large amounts of data are being used, making them the de-facto standard approach in vision-language modelling (Dosovitskiy et al., 2020a). VilBert is an earlier model that, in contrast to data2vec, can handle cross-modality tasks. Finally, Flamingo is a modern few-shot learning model which features 80B parameters, significantly more than the other two models. Through a large language model incorporated in its architecture, it has strong text-generation capabilities to tackle open-ended tasks.
It also raises the question of how to efficiently train increasingly large models, and it shows the effectiveness of using perceiver architectures (Jaegle et al. (2021a)) to encode inputs from different modalities, as well as how to leverage communication between pretrained and frozen models.

3.1 Image2Text

Author: Luyang Chu
Supervisor: Christian Heumann

Image captioning refers to the task of producing descriptive text for given images. It has stimulated interest in both natural language processing and computer vision research in recent years. Image captioning is a key task that requires a semantic comprehension of images as well as the capacity to generate accurate and precise description sentences.

3.1.1 Microsoft COCO: Common Objects in Context

The understanding of visual scenes plays an important role in computer vision (CV) research. It includes many tasks, such as image classification, object detection, object localization and semantic scene labeling. Throughout the history of CV research, high-quality image datasets have played a critical role. They are not only essential for training and evaluating new algorithms, but also lead the research into new challenging directions (Lin et al., 2014c). In the early years, researchers developed datasets (Deng et al., 2009; Xiao et al., 2010; Everingham et al., 2010) which enabled the direct comparison of hundreds of image recognition algorithms, leading to an early evolution in object recognition.
In the more recent past, ImageNet (Deng et al., 2009), which contains millions of images, has enabled breakthroughs in both object classification and detection research using new deep learning algorithms. With the goal of advancing the state-of-the-art in object recognition, especially scene understanding, a new large-scale data set called "Microsoft Common Objects in Context" (MS COCO) was published in 2014. MS COCO focuses on three core problems in scene understanding: detecting non-iconic views, detecting the semantic relationships between objects, and determining the precise localization of objects (Lin et al., 2014c). The MS COCO data set contains 91 common object categories, with a total of 328,000 images as well as 2,500,000 instance labels. The authors claim that all of these images could be recognized by a four-year-old child. 82 of the categories include more than 5,000 labeled instances.
These labeled instances may support the detection of relationships between objects in MS COCO. In order to provide precise localization of object instances, only "thing" categories, e.g. car, table, or dog, were included. Objects which do not have clear boundaries, e.g. sky, sea, or grass, were not included. In current object recognition research, algorithms perform well on images with iconic views. Images with an iconic view are defined as containing a single object category of interest in the center of the image.
To accomplish the goal of detecting the contextual relationships between objects, more complex images with multiple objects, as well as natural images from daily life, were also gathered for the data set. In addition to MS COCO, researchers have been working on the development of other large databases. In recent years, many large databases like ImageNet, PASCAL VOC (Everingham et al., 2010) and SUN (Xiao et al., 2010) have been developed in the field of computer vision, each with its own specific focus.

Datasets for object recognition can be roughly split into three groups: object classification, object detection and semantic scene labeling. Object classification requires binary labels indicating whether objects are present in an image. ImageNet (Deng et al., 2009) is clearly distinguishable from the other datasets in terms of size: it contains 22k categories with 500-1,000 images each, i.e. over 14 million labeled images in total, with both entity-level and fine-grained categories organized via the WordNet hierarchy, and it has enabled significant advances in image classification. Detecting an object involves two steps: the first is to ensure that an object from a specified class is present, the second is to localize the object in the image with a bounding box. This can be applied to tasks like face detection or pedestrian detection. The PASCAL VOC (Everingham et al., 2010) data set supports the detection of basic object categories. With 20 object categories and over 11,000 images, PASCAL VOC contains over 27,000 labeled object instances, additionally annotated with bounding boxes.
Almost 7,000 of these object instances come with detailed segmentations (Lin et al., 2014c). Labeling semantic objects in a scene requires that each pixel of an image is labeled as belonging to a category, such as sky, chair, etc., but individual instances of objects do not need to be segmented (Lin et al., 2014c). Some objects, like sky, grass or street, can also be defined and labeled in this way. The SUN data set (Xiao et al., 2010) combines many of the properties of both object detection and semantic scene labeling data sets for the task of scene understanding; it contains 908 scene categories from the WordNet dictionary (Fellbaum, 2000) with segmented objects.
Its 3,819 object categories are split between those typical of object detection datasets (person, chair) and those typical of semantic scene labeling (wall, sky, floor) (Lin et al., 2014c).

3.1.1.1 Image Collection and Annotation for MS COCO

MS COCO is a large-scale, richly annotated data set; the process of building it consisted of two phases: data collection and image annotation. In order to select representative object categories for images in MS COCO, the researchers collected several categories from different existing data sets like PASCAL VOC (Everingham et al., 2010) and other sources. All these object categories could, according to the authors, be recognized by children between 4 and 8 years of age.
The quality of the object categories was ensured by co-authors, who rated the categories on a scale from 1 to 5 depending on their common occurrence, practical applicability and diversity from other categories (Lin et al., 2014c). The final number of categories on the list was 91, including all categories from PASCAL VOC. With the help of these representative object categories, the authors of MS COCO wanted to collect a data set in which the majority of the included images are non-iconic. All included images can be roughly divided into three types according to Fig. 3.2: iconic-object images, iconic-scene images and non-iconic images (Lin et al., 2014c).

FIGURE 3.2: Types of images in the data set (Lin et al., 2014c).

Images were collected through two strategies: firstly, images from Flickr, a platform for photos uploaded by amateur photographers, were collected together with their keywords. Secondly, the researchers searched for pairwise combinations of object categories like "dog + car" to gather more non-iconic images and images with rich contextual relationships (Lin et al., 2014c). Due to the scale of the dataset and the high cost of the annotation process, designing a high-quality yet cost-efficient annotation pipeline was a difficult task.
The annotation pipeline in Fig. 3.3 for MS COCO was split into three primary tasks: 1. category labeling, 2. instance spotting, and 3. instance segmentation (Lin et al., 2014c).

FIGURE 3.3: Annotation pipeline for MS COCO (Lin et al., 2014c).

As can be seen in Fig. 3.3, the object categories in each image were determined in the first step.
Due to the large number of categories, a hierarchical approach was used instead of doing binary classification for each category. All 91 categories were grouped into 11 super-categories. The annotator then examined, for each single instance, whether it belongs to one of the given super-categories. Workers only had to label one instance for each of the super-categories with a category's icon (Lin et al., 2014c). For each image, eight workers were asked to label it.
This hierarchical approach helped to reduce the labeling time. However, the first phase still took 20k worker hours to complete. In the next step, all instances of the object categories in an image were labeled, with at most 10 instances of a given category per image labeled by each worker. In both the instance spotting and the instance segmentation steps, the locations of the instances found by workers in the previous stage were visible to the current worker. Each image was again labeled by eight workers, summing up to a total of 10k worker hours. In the final segmentation stage, each object instance was segmented; the segmentations of other instances and the specification of the object instance from the previous stage were again shown to the worker. Segmenting 2.5 million object instances was an extremely time-consuming task, requiring over 22 worker hours per 1,000 segmentations.
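The per-instance annotations produced by a pipeline like this are commonly distributed in the COCO JSON format, which records images, categories (with super-categories) and one record per object instance. The miniature example below is invented (file names, ids and coordinates are made up), but follows the standard key layout of COCO-style detection annotations:

```python
from collections import Counter

# A hypothetical, minimal COCO-style annotation dictionary: one image,
# two categories, and one annotation record per labeled object instance.
coco = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480},
    ],
    "categories": [
        {"id": 18, "name": "dog", "supercategory": "animal"},
        {"id": 3, "name": "car", "supercategory": "vehicle"},
    ],
    "annotations": [
        {"id": 101, "image_id": 1, "category_id": 18,
         "bbox": [120.0, 200.0, 180.0, 150.0],  # [x, y, width, height]
         "segmentation": [[120.0, 200.0, 300.0, 200.0,
                           300.0, 350.0, 120.0, 350.0]],
         "area": 27000.0, "iscrowd": 0},
        {"id": 102, "image_id": 1, "category_id": 3,
         "bbox": [10.0, 50.0, 200.0, 100.0],
         "segmentation": [[10.0, 50.0, 210.0, 50.0,
                           210.0, 150.0, 10.0, 150.0]],
         "area": 20000.0, "iscrowd": 0},
    ],
}

# Count labeled instances per category name, mirroring the
# "instances per category" statistics reported for the data set.
names = {c["id"]: c["name"] for c in coco["categories"]}
counts = Counter(names[a["category_id"]] for a in coco["annotations"])
```

Note that the segmentation polygons carry strictly more information than the bounding boxes, which is what distinguishes the instance-segmentation stage of the pipeline from plain detection labels.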
To minimize cost and improve the quality of segmentation, all workers were required to complete a training task for each object category. To further ensure quality, an explicit verification step was performed on each segmented instance.

3.1.1.2 Comparison with other data sets

In recent years, researchers have developed several pre-training data sets and benchmarks which helped the development of algorithms for CV. These data sets vary significantly in size, number of categories and types of images. In the previous part, we also introduced the different research foci of data sets such as ImageNet (Deng et al., 2009), PASCAL VOC (Everingham et al., 2010) and SUN (Xiao et al., 2010). ImageNet, containing millions of images, has enabled major breakthroughs in both object classification and detection research using a new class of deep learning algorithms. It was created with the intention to capture a large number of object categories, many of which are fine-grained. SUN focuses on labeling scene types and the objects that commonly occur in them. Finally, PASCAL VOC's primary application is object detection in natural images. MS COCO is designed for the detection and segmentation of objects occurring in their natural context (Lin et al., 2014c).
With the help of Fig. 3.4, one can compare MS COCO to ImageNet, PASCAL VOC and SUN with respect to different aspects (Lin et al., 2014c). The number of instances per category for all 91 categories in MS COCO and PASCAL VOC is shown in subfigure 3.4 (a). Compared to PASCAL VOC, MS COCO has both more categories and (on average) more instances per category.

FIGURE 3.4: Comparison of MS COCO with PASCAL VOC, SUN and ImageNet (Lin et al., 2014c).

The number of object categories and the number of instances per category for all the datasets are shown in subfigure 3.4 (d). MS COCO has fewer categories than ImageNet and SUN, but it has the highest average number of instances per category among all the data sets, which, from the perspective of the authors, might be useful for learning complex models capable of precise localization (Lin et al., 2014c). Subfigures 3.4 (b) and (c) show the number of annotated categories and annotated instances per image for MS COCO, ImageNet, PASCAL VOC and SUN (average numbers of categories and instances are shown in parentheses). On average, MS COCO contains 3.5 categories and 7.7 instances per image.
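The per-image averages reported in these comparisons are simple ratios over the annotation records. A toy recomputation over an invented list of (image_id, category) instance annotations (not real MS COCO data) makes the two statistics explicit:

```python
from collections import defaultdict

# Invented instance annotations: one tuple per labeled object instance.
instances = [
    (1, "dog"), (1, "dog"), (1, "car"),  # image 1: 2 categories, 3 instances
    (2, "person"), (2, "chair"),         # image 2: 2 categories, 2 instances
    (3, "car"),                          # image 3: 1 category,  1 instance
]

# Group the instance categories by image.
per_image = defaultdict(list)
for image_id, category in instances:
    per_image[image_id].append(category)

n_images = len(per_image)
# "Instances per image": total instance count divided by image count.
avg_instances = sum(len(cats) for cats in per_image.values()) / n_images
# "Categories per image": count DISTINCT categories within each image.
avg_categories = sum(len(set(cats)) for cats in per_image.values()) / n_images
```

With this invented data, `avg_instances` is 6/3 = 2.0 and `avg_categories` is 5/3, illustrating why the two numbers differ: repeated instances of one category raise the first average but not the second.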
ImageNet and PASCAL VOC both have on average less than 2 categories and 3 instances per image. The SUN data set has the most contextual information, with on average 9.8 categories and 17 instances per image. Subfigure 3.4 (e) depicts the distribution of instance sizes for the MS COCO, ImageNet Detection, PASCAL VOC and SUN data sets.

[Figure 3.4 panels: (a) instances per category; (b) categories per image (COCO 3.5, PASCAL VOC 1.4, ImageNet 1.7, SUN 9.8); (c) instances per image (COCO 7.7, PASCAL VOC 2.3, ImageNet 3.0, SUN 17.0); (d) number of categories vs. number of instances per category; (e) instance size as percent of image size.]

3.1.1.3 Discussion

MS COCO is a large-scale data set for detecting and segmenting objects found in everyday life, with the aim of improving the state-of-the-art in object recognition and scene understanding.
It focuses on non-iconic images of objects in natural environments and contains rich contextual information, with many objects present per image. MS COCO is one of the typically used vision data sets, which are labor-intensive and costly to create. At vast cost and with over 70,000 worker hours, 2.5 million instances were annotated to drive the advancement of object detection and segmentation algorithms. MS COCO is still a good benchmark for the field of computer vision (Lin et al., 2014c). The MS COCO team also points out directions for future work. For example, "stuff" labels like "sky", "grass" and "street" may also be included in the data set, since "stuff" categories provide significant contextual information for object detection.

3.1.2 Models for Image Captioning

The image captioning task is generally to describe the visual content of an image in natural language, so it requires an algorithm to understand and model the relationships between visual and textual elements and to generate a sequence of output words (Cornia et al., 2020). In the last few years, a collection of methods has been proposed for image captioning. Earlier approaches were based on the generation of simple templates, which contained the output produced by an object detector or attribute predictor (Socher and Fei-Fei, 2010; Yao et al., 2010). Owing to the sequential nature of language, most research on image captioning has focused on deep learning techniques, especially Recurrent Neural Network models (RNNs) (Vinyals et al., 2015; Karpathy and Fei-Fei, 2014) or one of their special variants (e.g. LSTMs). Mostly, RNNs are used for sequence generation as language models, while visual information is encoded in the output of a CNN. With the aim of modelling the relationships between image regions and words, graph convolutional neural networks (Yao et al., 2018a) or single-layer attention mechanisms (Xu et al., 2015) in the image encoding phase have been proposed to incorporate more semantic and spatial relationships between objects. RNN-based models are widely adopted; however, they are limited in representation power due to their sequential nature (Cornia et al., 2020). Recently, new fully-attentive models, in which the use of self-attention has replaced recurrence, have been proposed.
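The classic CNN-plus-RNN pipeline described above can be sketched in a few lines of NumPy. Everything here is illustrative: the "CNN" feature is a random vector, the vocabulary is a toy one, and the decoder weights are untrained, so the emitted caption is arbitrary; the point is the control flow of conditioning the recurrent state on the image and then decoding greedily word by word.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; "<s>" starts decoding and "</s>" ends it.
vocab = ["<s>", "</s>", "a", "dog", "runs"]
V, D, H = len(vocab), 8, 16          # vocab size, feature dim, hidden dim

# Stand-in for a CNN image encoding (e.g. a pooled ResNet feature).
img_feat = rng.normal(size=D)

# Untrained decoder parameters (training is omitted in this sketch).
W_e  = rng.normal(scale=0.1, size=(V, D))   # word embeddings
W_xh = rng.normal(scale=0.1, size=(D, H))   # input  -> hidden
W_hh = rng.normal(scale=0.1, size=(H, H))   # hidden -> hidden
W_hy = rng.normal(scale=0.1, size=(H, V))   # hidden -> vocab logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def greedy_caption(feat, max_len=10):
    """Condition the RNN state on the image, then decode greedily."""
    h = np.tanh(feat @ W_xh)                # image initialises the state
    word, caption = "<s>", []
    for _ in range(max_len):
        x = W_e[vocab.index(word)]          # embed the previous word
        h = np.tanh(x @ W_xh + h @ W_hh)    # vanilla RNN step
        word = vocab[int(np.argmax(softmax(h @ W_hy)))]
        if word == "</s>":
            break
        caption.append(word)
    return caption

print(greedy_caption(img_feat))
```

Real systems replace the vanilla RNN step with an LSTM and train all weights end-to-end on image-caption pairs, but the image-conditioned decoding loop is the same.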
New approaches apply the Transformer architecture (Vaswani et al., 2017d) and BERT (Devlin et al., 2019) models to solve image captioning tasks. The Transformer consists of an encoder with a stack of self-attention and feed-forward layers, and a decoder which uses (masked) self-attention on words and cross-attention over the output of the last encoder layer (Cornia et al., 2020). While the aforementioned approaches exploit the original Transformer architecture, some other transformer-based approaches pair a transformer-like encoder with an LSTM decoder. Others (Herdade et al., 2019) proposed a transformer architecture for image captioning that at the same time focuses on geometric relations between input objects.
Specifically, additional geometric weights between object pairs, which are used to scale the attention weights, are computed. Similarly, an extension of the attention operator, in which the final attended information is weighted by a gate guided by the context, was introduced at around the same time (Huang et al., 2019).

3.1.3 Meshed-Memory Transformer for Image Captioning (M2)

Transformer-based architectures have been widely implemented in sequence modeling tasks like machine translation and language understanding. However, their applicability for multi-modal tasks like image captioning has still been largely under-explored (Cornia et al., 2020).
FIGURE 3.5: M2 Transformer (Cornia et al., 2020). [The figure shows the memory-augmented encoding and meshed decoding layers producing the example caption "A baseball player is throwing a ball to another player".]

A novel fully-attentive approach called Meshed-Memory Transformer for Image Captioning (M2) was proposed in 2020 (Cornia et al., 2020) with the aim of improving the design of both the image encoder and the language decoder. Compared to all previous image captioning models, M2 (see Fig. 3.5) has two novelties: the encoder encodes a multi-level representation of the relationships between image regions with respect to low-level and high-level relations, and a-priori knowledge can be learned and modeled by using persistent memory vectors. The multi-layer architecture exploits both low- and high-level visual relationships through a learned gating mechanism, which computes the weight at each level; therefore, a mesh-like connectivity between encoder and decoder layers is created for the sentence generation process (Cornia et al., 2020).

3.1.3.1 M2 Transformer Architecture

FIGURE 3.6: M2 Transformer Architecture (Cornia et al., 2020).

Fig. 3.6 shows the detailed architecture of the M2 Transformer.
It can be divided into the encoder (left) module and the decoder (right) module, both with multiple layers. Given the input image regions X, the image is passed through the attention and feed-forward layers. The relationships between image regions, together with a-priori knowledge, are encoded in each encoding layer; the output of each encoding layer is read by the decoding layers to generate the caption for the image word by word (Cornia et al., 2020).

All interactions between word- and image-level features of the input image X are modeled by using scaled dot-product attention. Attention operates on vectors of queries q, keys k and values v, and takes a weighted sum of the value vectors according to a similarity distribution between query and key vectors. Attention can be defined as follows (Cornia et al., 2020):

\[
\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\left(\frac{QK^{T}}{\sqrt{d}}\right)V \tag{3.1}
\]

where Q is a matrix of n_q query vectors, K and V both contain n_k keys and values, all vectors have the same dimensionality, and d is a scaling factor.

3.1.3.1.1 Memory-Augmented Encoder

For the given image regions X, attention can be used to obtain a permutation-invariant encoding of X through self-attention operations; the operator from the Transformer can be defined as follows (Cornia et al., 2020):

\[
S(X) = \operatorname{Attention}(W_{q}X, W_{k}X, W_{v}X) \tag{3.2}
\]

In this case, queries, keys and values are linear projections of the input features, and W_q, W_k, W_v are their learnable weights; they depend solely on the pairwise similarities between linear projections of the input set X. The self-attention operator encodes the pairwise relationships inside the input. But self-attention also has its limitation: a-priori knowledge on relationships between image regions cannot be modelled. To overcome this limitation, the authors introduce a memory-augmented attention operator by extending the keys and values with additional prior information, which does not depend on the image regions X. The additional keys and values are initialized as plain learnable vectors which can be directly updated via SGD. The operator can be defined as follows (Cornia et al., 2020):

\[
\mathcal{M}_{mem}(X) = \operatorname{Attention}(W_{q}X, K, V), \qquad
K = [W_{k}X, M_{k}], \qquad
V = [W_{v}X, M_{v}] \tag{3.3}
\]

M_k and M_v are learnable matrices with n_m rows, and [·,·] indicates concatenation. The additional keys and values help to retrieve a-priori knowledge from the input while keeping the queries unchanged (Cornia et al., 2020). For the encoding layer, a memory-augmented operator is injected into a transformer-like layer; its output is fed into a position-wise feed-forward layer (Cornia et al., 2020):

\[
F(X)_{i} = U\sigma(VX_{i} + b) + c \tag{3.4}
\]

X_i indicates the i-th vector of the input set, and F(X)_i the i-th vector of the output. Also, σ(·) is the ReLU activation function, V and U are learnable weight matrices, and b and c are bias terms (Cornia et al., 2020).
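A minimal NumPy sketch may make Eqs. (3.1)–(3.3) concrete. All dimensions, weights and the number of memory slots are illustrative assumptions; the actual model uses multiple attention heads and learns M_k, M_v by SGD, both of which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention, Eq. (3.1)."""
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

# Input set X: n image regions, each a d-dimensional feature.
n, d, n_m = 5, 16, 4                 # regions, feature dim, memory slots
X = rng.normal(size=(n, d))

W_q = rng.normal(scale=0.1, size=(d, d))
W_k = rng.normal(scale=0.1, size=(d, d))
W_v = rng.normal(scale=0.1, size=(d, d))

# Memory slots M_k, M_v (random here; learnable parameters in the paper).
M_k = rng.normal(scale=0.1, size=(n_m, d))
M_v = rng.normal(scale=0.1, size=(n_m, d))

def mem_attention(X):
    """Memory-augmented self-attention, Eq. (3.3): keys and values are
    extended with the memory slots while the queries stay unchanged."""
    K = np.concatenate([X @ W_k, M_k], axis=0)   # (n + n_m, d)
    V = np.concatenate([X @ W_v, M_v], axis=0)
    return attention(X @ W_q, K, V)

out = mem_attention(X)
print(out.shape)  # (5, 16): one output vector per input region
```

Because the memory rows enter only the keys and values, each region still emits exactly one output vector, but that vector can now draw on slot contents that encode knowledge independent of the current image.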
Each component is complemented by a residual connection and a layer-norm operation. The complete definition of an encoding layer can finally be written as (Cornia et al., 2020):

\[
Z = \operatorname{AddNorm}(\mathcal{M}_{mem}(X)), \qquad
\tilde{X} = \operatorname{AddNorm}(F(Z)) \tag{3.5}
\]

Finally, the full encoder stacks multiple encoding layers in a sequential fashion, so that the i-th layer uses the output set computed by layer i − 1; higher encoding layers can exploit and refine relationships identified by previous layers, and n encoding layers produce the output \(\tilde{X} = (\tilde{X}^{1}, \ldots, \tilde{X}^{n})\) (Cornia et al., 2020).

3.1.3.1.2 Meshed Decoder

The decoder depends on both previously generated words and image region encodings. Meshed cross-attention can take advantage of all the encoder layers to generate captions for the image. The structure of the meshed decoder is shown on the right side of Fig. 3.6. The input sequence vector Y and the outputs of all encoder layers \(\tilde{X}\) are connected by the meshed attention operator, gated through cross-attention. The meshed attention operator is formally defined as (Cornia et al., 2020):

\[
\mathcal{M}_{mesh}(\tilde{X}, Y) = \sum_{i=1}^{N} \alpha_{i} \odot C(\tilde{X}^{i}, Y) \tag{3.6}
\]
C(·,·) stands for the encoder-decoder cross-attention; it is computed with queries from the decoder, while the keys and values come from the encoder (Cornia et al., 2020):

\[
C(\tilde{X}^{i}, Y) = \operatorname{Attention}(W_{q}Y, W_{k}\tilde{X}^{i}, W_{v}\tilde{X}^{i}) \tag{3.7}
\]

α_i is a matrix of weights of the same size as the cross-attention results; it models both the single contribution of each encoder layer and the relative importance between different layers (Cornia et al., 2020):

\[
\alpha_{i} = \sigma\left(W_{i}\,[Y, C(\tilde{X}^{i}, Y)] + b_{i}\right) \tag{3.8}
\]

Here [·,·] indicates concatenation, σ(·) is the sigmoid activation function, W_i is a weight matrix, and b_i is a learnable bias vector (Cornia et al., 2020).
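The gating of Eqs. (3.6)–(3.8) can be sketched as follows in NumPy. For brevity a single gate matrix is shared across encoder layers, whereas the paper learns a separate W_i per layer; all shapes and weights here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention(Q, K, V):
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

N, t, n, d = 3, 4, 5, 8        # encoder layers, words, regions, model dim
Y = rng.normal(size=(t, d))                 # decoder word representations
X_tilde = rng.normal(size=(N, n, d))        # outputs of all N encoder layers

W_q = rng.normal(scale=0.1, size=(d, d))
W_k = rng.normal(scale=0.1, size=(d, d))
W_v = rng.normal(scale=0.1, size=(d, d))
W_alpha = rng.normal(scale=0.1, size=(2 * d, d))  # shared gate (paper: per-layer W_i)
b_alpha = np.zeros(d)

def meshed_attention(X_tilde, Y):
    """Meshed cross-attention, Eqs. (3.6)-(3.8): cross-attend to every
    encoder layer and gate each contribution with a sigmoid weight matrix."""
    out = np.zeros_like(Y)
    for X_i in X_tilde:
        C_i = attention(Y @ W_q, X_i @ W_k, X_i @ W_v)            # Eq. (3.7)
        gate_in = np.concatenate([Y, C_i], axis=1)                # [Y, C]
        alpha_i = sigmoid(gate_in @ W_alpha + b_alpha)            # Eq. (3.8)
        out += alpha_i * C_i                                      # Eq. (3.6)
    return out

out = meshed_attention(X_tilde, Y)
print(out.shape)  # (4, 8)
```

The elementwise product lets the decoder weight each encoder level per word and per feature, which is what produces the mesh-like connectivity between the two stacks.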
In the decoder layers, the prediction of a word should only depend on the previously generated words, so each decoder layer comprises a masked self-attention operation, which means that the operator can only connect queries derived from the t-th element of its input sequence Y with keys and values from the left sub-sequence, i.e. Y≤t. Similar to the encoder layers, the decoder layers also contain a position-wise feed-forward layer, so the decoder layer can finally be defined as (Cornia et al., 2020):

\[
Z = \operatorname{AddNorm}(\mathcal{M}_{mesh}(\tilde{X}, \operatorname{AddNorm}(S_{mask}(Y)))), \qquad
\tilde{Y} = \operatorname{AddNorm}(F(Z)) \tag{3.9}
\]

where S_mask indicates a masked self-attention over time (Cornia et al., 2020). The full decoder with multiple decoder layers takes the input word vectors as well as the t-th element (and all elements prior to it) of its output sequence to predict the word at t + 1, conditioned on Y≤t. Finally, the decoder applies a linear projection and a softmax operation, whose output can be seen as a probability distribution over all words in the vocabulary (Cornia et al., 2020).

3.1.3.1.3 Comparison with other models on the MS COCO data sets

The M2 Transformer was evaluated on MS COCO, which is still one of the most commonly used test data sets for image captioning. Instead of using the original MS COCO splits, Cornia et al. (2020) follow the split of MS COCO provided by Karpathy and Fei-Fei (2014).
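Before turning to the evaluation, the masked self-attention S_mask from Eq. (3.9) above is worth illustrating in isolation: the causal constraint is enforced simply by setting all "future" attention logits to −∞ before the softmax. Projections and multiple heads are omitted, and the input is random.

```python
import numpy as np

rng = np.random.default_rng(2)

def masked_self_attention(Y):
    """Masked self-attention S_mask: position t may only attend to Y_{<=t},
    enforced by setting future logits to -inf before the softmax."""
    t, d = Y.shape
    logits = Y @ Y.T / np.sqrt(d)               # projections omitted for brevity
    future = np.triu(np.ones((t, t), dtype=bool), k=1)
    logits[future] = -np.inf                    # block attention to the future
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)        # exp(-inf) = 0, rows renormalise
    return w, w @ Y

Y = rng.normal(size=(4, 8))
weights, out = masked_self_attention(Y)
print(np.round(weights, 2))  # strictly lower-triangular plus diagonal
```

Because row t of the weight matrix is zero beyond column t, the representation of each word is a function of Y≤t only, which is exactly the condition the decoder needs for word-by-word generation.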
Karpathy uses 5,000 images for validation, 5,000 images for testing and the rest for training. For model evaluation and comparison, standard metrics for evaluating generated sequences are used, which have been introduced in the second chapter: BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE (Lin, 2004), CIDEr (Vedantam et al., 2015), and SPICE (Anderson et al., 2016).

FIGURE 3.7: Comparison of M² with Transformer-based alternatives (Cornia et al., 2020)

The transformer architecture in its original configuration with six layers had already been applied to captioning, but researchers speculated that captioning might require specific architectures, so variations of the original transformer are compared with the M² Transformer.
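To make the n-gram metrics above more concrete, the following sketch computes the modified (clipped) n-gram precision and the brevity penalty that BLEU combines. It is a toy illustration against a single reference, not the reference implementation used in the papers:

```python
import math
from collections import Counter

def modified_ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision in the spirit of BLEU (Papineni et al., 2002):
    each candidate n-gram counts at most as often as it occurs in the reference."""
    def ngrams(tokens, k):
        return Counter(tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1))
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

def brevity_penalty(candidate, reference):
    # penalizes candidates shorter than the reference
    c, r = len(candidate), len(reference)
    return 1.0 if c > r else math.exp(1 - r / c)

cand = "a man riding a horse".split()
ref = "a man is riding a horse on a beach".split()
p1 = modified_ngram_precision(cand, ref, n=1)
p2 = modified_ngram_precision(cand, ref, n=2)
bp = brevity_penalty(cand, ref)
print(p1, p2)  # 1.0 0.75: every candidate unigram, and 3 of 4 bigrams, match
```

BLEU then takes a geometric mean of such precisions for n = 1..4 and multiplies it with the brevity penalty; CIDEr and SPICE weight n-grams and scene-graph tuples differently, but the clipping idea above is the common starting point.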
Other variations are a transformer with three layers and the "Attention on Attention" (AoA) approach (Huang et al., 2019) applied to the attentive layers, both in the encoder and in the decoder (Cornia et al., 2020). The second part of the comparison evaluates the importance of the meshed connections between encoder and decoder layers. M² Transformer (1-to-1) is a reduced version of the original M² Transformer, in which each encoder layer is connected only to the corresponding decoder layer instead of being connected to all the decoder layers. Figure 3.7 reports the following results (B-1/B-4 = BLEU-1/BLEU-4, M = METEOR, R = ROUGE, C = CIDEr, S = SPICE):

Model                                  B-1   B-4   M     R     C      S
Transformer (w/ 6 layers as in [39])   79.1  36.2  27.7  56.9  121.8  20.9
Transformer (w/ 3 layers)              79.6  36.5  27.8  57.0  123.6  21.1
Transformer (w/ AoA [14])              80.3  38.8  29.0  58.4  129.1  22.7
M² Transformer 1-to-1 (w/o mem.)       80.5  38.2  28.9  58.2  128.4  22.2
M² Transformer 1-to-1                  80.3  38.2  28.9  58.2  129.2  22.5
M² Transformer (w/o mem.)              80.4  38.3  29.0  58.2  129.4  22.6
M² Transformer (w/ softmax)            80.3  38.4  29.1  58.3  130.3  22.5
M² Transformer                         80.8  39.1  29.2  58.6  131.2  22.6

As one can see from Fig. 3.7, the original Transformer reaches a CIDEr score of 121.8, which is lower than the reduced version of the M² Transformer with 129.2 CIDEr. With respect to meshed connectivity, which helps to exploit relationships encoded at all layers and weights them with a sigmoid gating, one can observe a further improvement in CIDEr from 129.2 to 131.2. The roles of the memory vectors and of a softmax gating schema for the M² Transformer are also included in the table: eliminating the memory vectors reduces the performance by nearly 1 CIDEr point in both the reduced and the original M² Transformer (Cornia et al., 2020).

FIGURE 3.8: Comparison with the state-of-the-art on the "Karpathy" test split, in single-model setting (Cornia et al., 2020).

Fig. 3.8 compares the performance of the M² Transformer with several recently proposed models for image captioning.
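The memory vectors evaluated in the ablation above extend the attention keys and values with additional learned slots, so the encoder can attend to a priori information that is not present in the input regions. A minimal sketch of this idea follows; names, shapes, and numbers are illustrative, not Cornia et al.'s implementation:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def memory_augmented_attention(query, keys, values, mem_keys, mem_values):
    """Single-head attention over the input keys/values extended by
    learned, input-independent memory slots."""
    all_keys = keys + mem_keys      # [region keys ..., memory keys ...]
    all_values = values + mem_values
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in all_keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, all_values))
            for i in range(len(all_values[0]))]

# two image-region keys/values plus one learned memory slot (toy numbers)
q = [1.0, 0.0]
ks, vs = [[1.0, 0.0], [0.0, 1.0]], [[1.0, 1.0], [2.0, 0.0]]
mk, mv = [[0.5, 0.5]], [[0.0, 3.0]]   # learned slot, independent of the input
out = memory_augmented_attention(q, ks, vs, mk, mv)
out_plain = memory_augmented_attention(q, ks, vs, [], [])
print(out, out_plain)
```

Dropping the memory slots (empty `mem_keys`/`mem_values`) recovers plain attention over the regions, which corresponds to the "w/o mem." rows in the table.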
SCST (Rennie et al., 2017) and Up-Down (Anderson et al., 2018) use attention over the grid of features and attention over image regions, respectively. RFNet (?) uses a recurrent fusion network to merge different CNN features; GCN-LSTM (Yao et al., 2018b) uses a Graph CNN to exploit pairwise relationships between image regions; SGAE (Yang et al., 2019) instead uses auto-encoding scene graphs. The original AoA-Net (Huang et al., 2019) approach uses attention on attention for encoding image regions and an LSTM language model.
Finally, ORT (Herdade et al., 2019) uses a plain transformer and weights attention scores in the region encoder with pairwise distances between detections (Cornia et al., 2020). In Fig. 3.8, the M² Transformer exceeds all other models on BLEU-4, METEOR, and CIDEr. Its performance is very close and competitive with SGAE on BLEU-1 and with ORT with respect to SPICE:

Model               B-1   B-4   M     R     C      S
SCST [33]           -     34.2  26.7  55.7  114.0  -
Up-Down [4]         79.8  36.3  27.7  56.9  120.1  21.4
RFNet [15]          79.1  36.5  27.7  57.3  121.9  21.2
Up-Down+HIP [49]    -     38.2  28.4  58.3  127.2  21.9
GCN-LSTM [48]       80.5  38.2  28.5  58.3  127.6  22.0
SGAE [46]           80.8  38.4  28.4  58.6  127.8  22.1
ORT [13]            80.5  38.6  28.7  58.4  128.3  22.6
AoANet [14]         80.2  38.9  29.2  58.8  129.8  22.4
M² Transformer      80.8  39.1  29.2  58.6  131.2  22.6

FIGURE 3.9: Examples of captions generated by the M² Transformer and the original Transformer model, as well as the corresponding ground-truths (Cornia et al., 2020).

Fig. 3.9 shows some examples of captions generated by the M² Transformer and the original transformer model, together with the corresponding ground-truths. According to the selected examples, the M² Transformer generates more accurate descriptions of the images and detects more detailed relationships between image regions (Cornia et al., 2020). The M² Transformer is a new transformer-based architecture for image captioning. It improves the image encoding by learning a multi-level representation of the relationships between image regions while exploiting a priori knowledge from each encoder layer, and it uses a mesh-like connectivity at the decoding stage to exploit low- and high-level features at the language generation steps. The evaluation on MS COCO shows that the M² Transformer surpasses most of the recent approaches and achieves a new state of the art on MS COCO (Cornia et al., 2020).
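The mesh-like connectivity summarized above can be illustrated with a small sketch: the cross-attention results from all encoder layers are weighted with sigmoid gates and summed. This is a simplified, hypothetical rendering of the idea (in the model the gate logits are learned and depend on the decoder state), not the original implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def meshed_aggregation(cross_attn_outputs, gate_logits):
    """Combine the per-encoder-layer cross-attention outputs C_i with
    sigmoid gates alpha_i = sigmoid(g_i), one gate per encoder layer."""
    gates = [sigmoid(g) for g in gate_logits]
    dim = len(cross_attn_outputs[0])
    return [sum(a * c[i] for a, c in zip(gates, cross_attn_outputs))
            for i in range(dim)]

# cross-attention results from three encoder layers (toy 2-d vectors)
C = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
g = [0.0, 100.0, -100.0]   # gates of roughly 0.5, 1.0, and 0.0
z = meshed_aggregation(C, g)
print([round(v, 3) for v in z])  # [0.5, 1.0]
```

In contrast, a 1-to-1 connectivity would hand the decoder layer only the output of its single corresponding encoder layer, which is exactly the reduced variant ablated in Fig. 3.7.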
The examples shown in Figure 3.9 (GT = ground truth):

GT: A man milking a brown and white cow in a barn.
Transformer: A man is standing next to a cow.
M² Transformer: A man is milking a cow in a barn.

GT: A man in a red Santa hat and a dog pose in front of a Christmas tree.
Transformer: A Christmas tree in the snow with a Christmas tree.
M² Transformer: A man wearing a Santa hat with a dog in front of a Christmas tree.

GT: A woman with blue hair and a yellow umbrella.
Transformer: A woman is holding an umbrella.
M² Transformer: A woman with blue hair holding a yellow umbrella.

GT: Several people standing outside a parked white van.
Transformer: A group of people standing outside of a bus.
M² Transformer: A group of people standing around a white van.

GT: Several zebras and other animals grazing in a field.
Transformer: A herd of zebras are standing in a field.
M² Transformer: A herd of zebras and other animals grazing in a field.

GT: A truck sitting on a field with kites in the air.
Transformer: A group of cars parked in a field with a kite.
M² Transformer: A white truck is parked in a field with kites.

GT: A woman who is skateboarding down the street.
Transformer: A woman walking down a street talking on a cell phone.
M² Transformer: A woman standing on a skateboard on a street.

GT: Orange cat walking across two red suitcases stacked on floor.
Transformer: An orange cat sitting on top of a suitcase.
M² Transformer: An orange cat standing on top of two red suitcases.

GT: Some people are standing in front of a red food truck.
Transformer: A group of people standing in front of a bus.
M² Transformer: A group of people standing outside of a food truck.

GT: A boat parked in a field with long green grass.
Transformer: A field of grass with a fence.
M² Transformer: A boat in the middle of a field of grass.

3.2 Text2Image

Author: Karol Urbańczyk

Supervisor: Jann Goschenhofer

Have you ever wondered what a painting artist could paint for you if you ordered a high-quality oil painting of a psychedelic hamster dragon? Probably not. Nevertheless, one of the answers could be:

FIGURE 3.10: Hamster dragon

The catch is that there is no human artist. The above picture comes from a 3.5-billion-parameter model called GLIDE by OpenAI (Nichol et al., 2021b). Every single value of every pixel was generated from a distribution that the model had to learn in the first place.
Before generating the image, GLIDE abstracted the concepts of ‘hamster’ and ‘dragon’ from looking at millions of training images. Only then was it able to create and combine them successfully into a meaningful visual representation. Welcome to the world of current text-to-image modelling!

The cross-modal field of text-to-image models has developed significantly over recent years. What was considered unimaginable only a few years ago today constitutes a new benchmark for researchers. New breakthroughs are being published every couple of months. Following these, possible business use cases are emerging, which attracts investment from the greatest players in AI research. However, a further trend of closed-source models is continuing, and the text-to-image field is probably one of the most obvious ones where it can be noticed.
We might need to get used to the fact that the greatest capabilities will soon be monopolized by a few companies. At the same time, the general public is becoming aware of the field itself and the disruption potential it brings. Crucial questions are already emerging. What constitutes art? What does the concept of being an author mean? The result of a generative model is in a sense a combination, or variation, of the abstracts it has seen in the past. But the same stands for a human author. Therefore, is a discussion about prejudices and biases needed?
Answers to all of these will require refinement through an extensive discussion. The last section of this chapter will try to highlight the most important factors that will need to be considered. However, the primary intention of this chapter is to present the reader with a perspective on how the field developed chronologically. Starting with the introduction of GANs, through the first cross-domain models, and ending with state-of-the-art achievements (as of September 2022), it will also try to grasp the most important concepts without being afraid of making technical deep dives. The author is aware that the rapid development pace makes it nearly impossible for this section to stay up-to-date, so it might very soon not fully cover the field. However, it must be stressed that the cutting-edge capabilities of the recent models tend to come from scale and software engineering tricks. Therefore, focusing on the core concepts should hopefully give this chapter a universal character, at least for some time.
This design choice also explains why many important works did not make it into this publication. Just to name a few of them as honorable mentions: GAWWN (Reed et al., 2016a), MirrorGAN (Qiao et al., 2019), or more recent ones: LAFITE (Zhou et al., 2021), Make-a-Scene (Gafni et al., 2022), or CogView (Ding et al., 2021). In one way or another, all of them pushed the research frontier one step further. Therefore, it needs to be clearly stated: the final selection of this chapter's content is a purely subjective decision of the author.

3.2.1 Seeking objectivity

Before diving into particular models, we introduce objective evaluation procedures that help assess the performance of consecutive works in comparison to their predecessors. Unfortunately, objectivity in comparing generative models is very hard to capture, since there is no straight way to draw deterministic conclusions about a model's performance (Theis et al., 2015). However, multiple quantitative and qualitative techniques have been developed to make up for it. Unfortunately, there is no general consensus as to which measures should be used. An extensive comparison has been performed by Borji (2018). A few of the most widely used ones in current research are presented below.

Inception Score (IS)

Introduced by Salimans et al. (2016), the Inception Score (IS) uses the Inception Net (Szegedy et al., 2015), trained on ImageNet data, to classify the fake images generated by the assessed model. Then, it measures the average KL divergence between the marginal label distribution $p(y)$ and the label distribution conditioned on the generated samples $p(y|x)$:

$$\mathrm{IS} = \exp\left(\mathbb{E}_x\left[\mathrm{KL}\left(p(y|x)\,\|\,p(y)\right)\right]\right)$$

$p(y)$ is desired to have high diversity (entropy); in other words, images from the generative model should represent a wide variety of classes. On the other hand, $p(y|x)$ is desired to have low diversity, meaning that images should represent meaningful concepts: if a range of cat images is being generated, they all should be confidently classified by Inception Net as cats. The intention behind IS is that a generative model with a higher distance (KL divergence in this case) between these distributions should have a better score.
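The computation above can be sketched in a few lines of numpy. The class-probability matrix below stands in for softmax outputs of Inception Net; the toy inputs are placeholders, not real classifier predictions.

```python
import numpy as np

def inception_score(p_yx: np.ndarray) -> float:
    """IS from an (n_images, n_classes) matrix of class probabilities p(y|x)."""
    p_y = p_yx.mean(axis=0, keepdims=True)                   # marginal label distribution p(y)
    kl = (p_yx * (np.log(p_yx) - np.log(p_y))).sum(axis=1)   # KL(p(y|x) || p(y)) per image
    return float(np.exp(kl.mean()))                          # exp of the average KL divergence

# Confident, diverse predictions (each class covered, near one-hot rows)
confident = np.eye(10) * 0.99 + 0.001
# Uninformative predictions: p(y|x) equals p(y), so KL = 0 and IS = 1
uniform = np.full((10, 10), 0.1)
assert inception_score(confident) > inception_score(uniform)
```

Note how both desiderata from the text appear here: a high score requires rows of `p_yx` that are individually peaked (confident classifications) while their average is spread across many classes.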
IS is considered a metric that correlates well with human judgment, hence its popularity.

Fréchet Inception Distance (FID)

A metric that is generally considered to improve upon the Inception Score is the Fréchet Inception Distance (FID). Heusel et al. (2017) argue that the main drawback of IS is that it does not consider the real data at all. Therefore, FID again uses Inception Net, but this time it embeds the images (both fake and real samples) into a feature space, stopping at a specific layer; in other words, some of the final layers of the network are discarded. The feature vectors are then assumed to follow a Gaussian distribution, and the Fréchet distance is calculated between the real and generated data distributions:

$$d^2\left((m, C), (m_w, C_w)\right) = \|m - m_w\|_2^2 + \mathrm{Tr}\left(C + C_w - 2(C C_w)^{1/2}\right)$$

where $(m, C)$ and $(m_w, C_w)$ represent the mean and covariance of the generated and real data Gaussians, respectively.
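The distance above can be sketched directly in numpy. The matrix square-root term is computed from the eigenvalues of $C C_w$, which are real and non-negative for the positive semi-definite covariances involved; the feature matrices here are random placeholders rather than real Inception embeddings.

```python
import numpy as np

def frechet_distance(feats_g: np.ndarray, feats_r: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two (n, d) feature matrices."""
    m, mw = feats_g.mean(axis=0), feats_r.mean(axis=0)
    C, Cw = np.cov(feats_g, rowvar=False), np.cov(feats_r, rowvar=False)
    # Tr((C Cw)^{1/2}) equals the sum of square roots of the eigenvalues of C @ Cw;
    # clip tiny negative values that arise from floating-point round-off.
    eigvals = np.linalg.eigvals(C @ Cw)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(np.sum((m - mw) ** 2) + np.trace(C) + np.trace(Cw) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(2000, 8))
fake_good = rng.normal(0.0, 1.0, size=(2000, 8))  # same distribution as "real"
fake_bad = rng.normal(3.0, 1.0, size=(2000, 8))   # shifted mean: should score worse
assert frechet_distance(fake_good, real) < frechet_distance(fake_bad, real)
```

In practice the eigenvalue trick is often replaced by an explicit matrix square root (e.g. `scipy.linalg.sqrtm`); both evaluate the same trace term.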
Obviously, low FID levels are desired. FID is considered to be consistent with human judgement and sensitive to image distortions, which are both desired properties. Figure 3.11 shows how FID increases (worsens) for different types of noise being added to images.

Precision / Recall

Precision and recall are among the most widely used metrics in many machine learning problem formulations. However, their classic definition cannot be applied to generative models due to the lack of objective labels. Sajjadi et al. (2018) came up with a novel definition of these metrics calculated directly from distributions, which was further improved by Kynkäänniemi et al. (2019).
The argument behind the need for such an approach is that metrics such as IS or FID provide only a one-dimensional view of the model's performance, ignoring the trade-off between precision and recall. A decent FID result might very well mean high recall (large variation, i.e. a wide range of data represented by the model), high precision (realistic images), or anything in between.

FIGURE 3.11: FID evaluated for different noise types. From upper left to lower right: Gaussian noise, Gaussian blur, implanted black rectangles, swirled images, salt and pepper noise, and the CelebA dataset contaminated by ImageNet images. Figure from Heusel et al. (2017).

Let $P_r$ denote the probability distribution of the real data, and $P_g$ the distribution of the generated data. In short, recall measures to what extent $P_r$ can be generated from $P_g$, while precision tries to grasp how many generated images fall within $P_r$.

FIGURE 3.12: Definition of precision and recall for distributions. Figure from Kynkäänniemi et al. (2019).

See Kynkäänniemi et al. (2019) for a more thorough explanation.

CLIP score

CLIP is a model from OpenAI [CLIP2021] which is explained in detail in the chapter about text-supporting computer vision models.
In principle, CLIP is capable of assessing the semantic similarity between a text caption and an image. Following this rationale, the CLIP score can be used as a metric and is defined as:

$$\mathbb{E}\left[s \cdot \left(f(\text{image}) \cdot g(\text{caption})\right)\right]$$
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='00 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='250 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='250 - ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='250 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='00 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='150 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='150 - ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='150 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='100 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='100 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='100 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='50 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='50 - ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='50 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='disturbance level ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='disturbance level ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='disturbance level ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='250 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='300 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='600 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='200 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='250 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='500 - ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='200 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='150 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='400 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='150 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='100 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='200 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='100 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='50 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='100 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='50 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='disturbance level ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='disturbance level ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='disturbance levelP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='(a)Exampledistributions ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='(b) Precision ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='(c) Recall104 ' 
where the expectation is taken over the batch of generated images and $s$ is the CLIP logit scale (Nichol et al., 2021b).

Human evaluations

It is common for researchers to also report qualitative measures. Many potential applications of the models are focused on deceiving the human spectator, which motivates the reporting of metrics based on human evaluation. The general concept of these evaluations is to test for:

- photorealism
- caption similarity (image-text alignment)

Usually, a set of images is presented to a human, whose task is to assess their quality with respect to the two above-mentioned criteria.
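The CLIP score formula can be illustrated with a small numpy sketch. Here `clip_score`, the embedding arrays, and the default scale value are all stand-ins: in practice $f$ and $g$ would be CLIP's image and text encoders returning L2-normalized embeddings, and $s$ its learned logit scale.

```python
import numpy as np

def clip_score(img_emb: np.ndarray, txt_emb: np.ndarray, s: float = 100.0) -> float:
    """Mean scaled cosine similarity over a batch of (image, caption) pairs."""
    img_emb = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt_emb = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sims = (img_emb * txt_emb).sum(axis=1)  # f(image) . g(caption) for each pair
    return float(s * sims.mean())           # expectation of the scaled similarity

rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 16))
# Perfectly matched pairs reach the maximum score of s; mismatched pairs score lower.
assert clip_score(emb, emb) > clip_score(emb, -emb)
```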
3.2.2 Generative Adversarial Networks

The appearance of Generative Adversarial Networks (GANs) was a major milestone in the development of generative models. Introduced by Goodfellow et al. (2014c), the idea of GANs presented a novel architecture and training regime, which corresponds to a minimax two-player game between a Generator and a Discriminator (hence the word adversarial). GANs can be considered an initial enabler for the field of text-to-image models, and for a long time GAN-like models were achieving state-of-the-art results, hence the presentation of their core concepts in this chapter.

3.2.2.1 Vanilla GAN for Image Generation

In a vanilla GAN, the Generator model (G) and the Discriminator model (D) are optimized together in a minimax game, where G aims at generating a sample so convincing that D will not be able to distinguish whether it comes from the real or the generated image distribution.
On the other hand, D is trained to discriminate between the two. Originally, a multilayer perceptron was proposed as the model architecture for both D and G, although in theory any differentiable function could be used.

More formally, let p_z denote the prior distribution defined on the input noise vector z. Then, the generator G(z) represents a function that maps this noisy random input to a generated image x. The discriminator D(x) outputs the probability that x comes from the real data rather than from the generator's distribution p_g. In this framework, D shall maximize the probability of guessing the correct label of both real and fake data, while G is trained to minimize log(1 − D(G(z))). This corresponds to the following value function of the minimax game:

min_G max_D V(D, G) = E_{x∼p_data(x)}[log(D(x))] + E_{z∼p_z(z)}[log(1 − D(G(z)))]
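As an aside, for a fixed generator the inner maximization has a well-known closed-form solution, D*(x) = p_data(x) / (p_data(x) + p_g(x)). The following toy sketch (plain Python, with made-up two-point distributions that are not part of the text) checks this numerically against the value function above:

```python
import math
import random

# Toy discrete distributions over two points x in {0, 1} (made up for illustration).
p_data = {0: 0.8, 1: 0.2}   # "real" data distribution
p_g    = {0: 0.3, 1: 0.7}   # generator's distribution

def value(D):
    """V(D, G) = E_{x~p_data}[log D(x)] + E_{x~p_g}[log(1 - D(x))]."""
    return (sum(p_data[x] * math.log(D[x]) for x in p_data)
            + sum(p_g[x] * math.log(1.0 - D[x]) for x in p_g))

# For a fixed G, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)).
D_star = {x: p_data[x] / (p_data[x] + p_g[x]) for x in p_data}

# No random perturbation of D* should increase the value function.
random.seed(0)
for _ in range(1000):
    D = {x: min(0.999, max(0.001, D_star[x] + random.uniform(-0.1, 0.1)))
         for x in D_star}
    assert value(D) <= value(D_star) + 1e-12
```

Since each term of V is concave in D(x), the perturbation check above succeeds for any clipped discriminator, which is exactly why D has no incentive to deviate from D* once G is fixed.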
Figure 3.13 depicts this process in a visual way.

FIGURE 3.13: GAN framework as proposed in Goodfellow et al. (2014c).

Some of the generated samples that had been achieved with this architecture already in 2014 can be seen in Figure 3.14.

FIGURE 3.14: Samples from generators trained on different datasets: a) MNIST, b) TFD, c) CIFAR-10 (MLP used for G and D), d) CIFAR-10 (CNN used). Highlighted columns show the nearest real example of the neighbouring sample. Figure from Goodfellow et al.
(2014c).

3.2.2.2 Conditioning on Text

So far, only image generation has been covered, completely ignoring textual input. Reed et al. (2016c) introduced an interesting concept of conditioning a DC-GAN (a GAN with CNNs as Generator and Discriminator) on textual embeddings. A separate model is trained and used for encoding the text. The resulting embeddings are then concatenated with the noise vector and fed into the Generator, and the Discriminator takes the embeddings as an input as well. The resulting model is referred to as GAN-INT-CLS.
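A minimal sketch of this conditioning step in plain Python; the 100-d noise, 1024-d embedding and 128-d projection sizes are illustrative assumptions, not taken from the text:

```python
import random
random.seed(0)

NOISE_DIM = 100    # dimensionality of the noise vector z (illustrative)
TEXT_DIM  = 1024   # raw text-embedding size (illustrative)
PROJ_DIM  = 128    # size after the fully-connected projection (illustrative)

def project(phi_t, weights):
    """Toy stand-in for the fully-connected projection of the text embedding."""
    return [sum(w * x for w, x in zip(row, phi_t)) for row in weights]

# Random toy inputs.
z     = [random.gauss(0.0, 1.0) for _ in range(NOISE_DIM)]
phi_t = [random.random() for _ in range(TEXT_DIM)]
W     = [[random.gauss(0.0, 0.01) for _ in range(TEXT_DIM)]
         for _ in range(PROJ_DIM)]

# The generator's actual input: noise concatenated with the projected text code.
generator_input = z + project(phi_t, W)
assert len(generator_input) == NOISE_DIM + PROJ_DIM
```

The same projected code is also concatenated with the discriminator's image feature maps, so both players see the caption.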
Both abbreviations (INT and CLS) stand for specific training choices, which are explained later in this chapter. An overview of the proposed architecture can be seen in Figure 3.15.

FIGURE 3.15: The proposed architecture of the convolutional GAN that is conditioned on text. The text encoding ϕ(t) is fed into both the Generator and the Discriminator. Before further convolutional processing, it is first projected to a lower dimensionality in fully-connected layers and concatenated with the image feature maps. Figure from Reed et al. (2016c).
Text embeddings

Since regular text embeddings are commonly trained in separation from the visual modality, simply by looking at the textual context, they are not well suited for capturing visual properties. This motivated Reed et al. (2016b) to come up with structured joint embeddings of images and text descriptions. GAN-INT-CLS implements it in the way described in Figure 3.16.

FIGURE 3.16: Figure from Reed et al. (2016c).

The text classifier induced by the learned correspondence function f_t is trained by optimizing the following structured loss:

1/N Σ_{n=1}^{N} Δ(y_n, f_v(v_n)) + Δ(y_n, f_t(t_n))    (2)

where {(v_n, t_n, y_n) : n = 1, ..., N} is the training data set, Δ is the 0-1 loss, v_n are the images, t_n are the corresponding text descriptions, and y_n are the class labels. The classifiers f_v and f_t are parametrized as follows:

f_v(v) = arg max_{y∈Y} E_{t∼T(y)}[φ(v)^T ϕ(t)]    (3)

f_t(t) = arg max_{y∈Y} E_{v∼V(y)}[φ(v)^T ϕ(t)]    (4)

where φ is the image encoder (e.g. a deep convolutional neural network), ϕ is the text encoder (e.g. a character-level CNN or LSTM), T(y) is the set of text descriptions of class y and likewise V(y) for images. The intuition here is that a text encoding should have a higher compatibility score with images of the corresponding class compared to any other class, and vice-versa.

GoogLeNet is used as the image encoder φ.
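The compatibility-based classification described above can be sketched as follows: pick the class whose text encodings have, on average, the highest dot product with the image encoding. All vectors below are made-up 3-d "encodings" for illustration only:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def f_v(phi_v, T):
    """Toy joint-embedding classifier: T maps a class label to the list of
    text encodings for that class; the score averages the dot-product
    compatibility over those encodings (the expectation over T(y))."""
    def score(y):
        return sum(dot(phi_v, t) for t in T[y]) / len(T[y])
    return max(T, key=score)

# Hypothetical per-class text encodings (invented for the example).
T = {
    "bird":   [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "flower": [[0.1, 0.9, 0.2], [0.0, 0.8, 0.3]],
}

phi_v = [0.85, 0.15, 0.05]          # image encoding resembling the "bird" texts
assert f_v(phi_v, T) == "bird"
```

The symmetric classifier f_t works the same way with the roles of images and texts swapped.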
For text encoding ϕ(t), the authors use a character-level CNN combined with an RNN. Essentially, the objective of the training is to minimize the distance between the encoded image and text representations. The image encoder is then discarded and only ϕ is used, as depicted in Figure 3.15.

GAN-CLS

CLS stands for Conditional Latent Space, which essentially means the GAN is conditioned on the embedded text. However, in order to fully grasp how exactly the model is conditioned on the input, we need to go beyond architectural choices. It is also crucial to present the specific training regime that was introduced for GAN-CLS and the motivation behind it. One way to train the system is to view text-image pairs as joint observations and train the discriminator to classify the entire pair as real or fake.
In such a case, however, the discriminator has no notion of whether the image matches the meaning of the text, because it cannot distinguish between the two types of error that exist: the image being unrealistic, and the image being realistic but mismatched with the text. The proposed solution to this problem is to present the discriminator with three observations at a time, all of which are included in the loss function: {real image with right text}, {real image with wrong text}, and {fake image with right text}. The intention is that the discriminator should classify them as {true}, {false}, {false}, respectively.

GAN-INT

The motivation behind this concept comes from the fact that interpolating between text embeddings tends to create observation pairs that are still close to the real data manifold.
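Such an interpolation between two text embeddings is simply a convex combination, sketched below in plain Python; the 4-d embeddings and the β = 0.5 mixing weight are illustrative assumptions:

```python
def interpolate(t1, t2, beta):
    """Synthetic text embedding: beta * t1 + (1 - beta) * t2."""
    return [beta * a + (1.0 - beta) * b for a, b in zip(t1, t2)]

# Two made-up caption embeddings, e.g. for "blue bird" vs. "red bird".
t_blue = [0.9, 0.1, 0.4, 0.0]
t_red  = [0.1, 0.9, 0.4, 0.0]

t_mix = interpolate(t_blue, t_red, 0.5)
assert all(abs(m - e) < 1e-12 for m, e in zip(t_mix, [0.5, 0.5, 0.4, 0.0]))

# Interpolants stay between the endpoints coordinate-wise, which is one
# reason they tend to remain close to the manifold of real caption embeddings.
assert all(min(a, b) <= m <= max(a, b)
           for a, m, b in zip(t_blue, t_mix, t_red))
```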
Therefore, generating additional synthetic text embeddings and using them instead of real captions in the training process works as a form of data augmentation and helps regularize the training. Figure 3.17 might be helpful for developing an intuition about the interpolation process.

Results

The model achieves its best performance when both of the mentioned methods are in use (GAN-INT-CLS). The models successfully transfer style (pose of the objects) and background from the training data when trained on the CUB (birds) and Oxford-102 (flowers) datasets. They also show interesting zero-shot abilities, meaning they can generate observations from unseen test classes (Figure 3.18). When trained on MS-COCO, GAN-CLS proves its potential to generalize over many domains, although the results are not always coherent (Figure 3.19).

FIGURE 3.17: Interpolating between sentences. Figure from Reed et al. (2016c).

FIGURE 3.18: Zero-shot generated birds using GAN, GAN-CLS, GAN-INT, GAN-INT-CLS. Figure from Reed et al. (2016c).
3.2.2.3 Further GAN-like development

Generative Adversarial Networks were the leading approach for text-to-image models for most of the field's short history. In the years following the introduction of GAN-INT-CLS, new concepts kept emerging, trying to push the results further, and many of them had a GAN architecture as their core part. In this section, a few such ideas are presented. The intention is to quickly skim through the most important ones; a curious reader should follow the corresponding papers.

FIGURE 3.19: Generated images using GAN-CLS on the MS-COCO validation set. Figure from Reed et al. (2016c).

StackGAN

Zhang et al. (2016a) introduced the StackGAN. The main contribution of the paper, which also found its place in other researchers' works, was the idea to stack more than one generator-discriminator pair inside the architecture.
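A rough sketch of this stacking idea in plain Python; the function bodies are placeholders, and the wiring (a second generator that consumes the first one's output together with the same text embedding, without a fresh noise vector, upscaling 64x64 to 256x256) follows the StackGAN design described in this section:

```python
import random
random.seed(0)

def stage1_generator(z, text_emb):
    """Placeholder Stage-I generator: noise + text embedding -> 64x64 image."""
    return {"resolution": 64}

def stage2_generator(stage1_image, text_emb):
    """Placeholder Stage-II generator: refines the Stage-I image using the
    same text embedding; note that no fresh noise vector is taken, so this
    stage works directly on improving the Stage-I result."""
    assert stage1_image["resolution"] == 64
    return {"resolution": 256}

text_emb = [random.random() for _ in range(128)]   # illustrative embedding size
z        = [random.gauss(0.0, 1.0) for _ in range(100)]

low_res  = stage1_generator(z, text_emb)
high_res = stage2_generator(low_res, text_emb)     # same embedding, no new z
assert high_res["resolution"] == 256
```

In principle more stages can be appended in the same fashion, each one refining the previous output.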
The Stage-II (second pair) generator is supposed to improve the results from Stage-I, taking into account only the text embedding (the same as in Stage-I) and the image generated in Stage-I, without a random vector. The deliberate omission of the random vector results in the generator working directly on improving the results from Stage-I. The purpose is also to increase the resolution (here from 64x64 to 256x256). The authors obtained great results already with two stages; however, in principle the architecture allows for stacking many of them.

FIGURE 3.20: (ref:stackgan)

AttnGAN

It is 2017 and many researchers believe attention is all they need (Vaswani et al., 2017e). Probably for the first time in text-to-image generation, an attention mechanism was used by Xu et al. (2017). The authors combined the idea with what StackGAN proposed and used three stages (generators G0, G1 and G2). However, this time the first layers of a particular generator attend to word feature vectors. This mechanism not only helps control how particular areas of the image are improved by consecutive generators, but also allows for visualizing attention maps.

FIGURE 3.21: Images generated by G0, G1, G2.
The two bottom rows show the 5 most attended words by G1 and G2, respectively. Figure from Xu et al. (2017).

DM-GAN

Another important milestone was DM-GAN (Dynamic Memory GAN) (Zhu et al., 2019). At that time, models were primarily focusing on generating an initial image and then refining it to a high-resolution one (as e.g. StackGAN does). However, such models heavily depend on the quality of the initialization of the first image. This problem was the main motivation for the authors to come up with a mechanism to prevent it.
DM-GAN proposes a dynamic memory module, which has two main components. First, its memory writing gate helps select the most important information from the text, based on the initial image. Second, a response gate merges the information from the image features with the memories. Both of these help refine the initial image much more effectively.

DF-GAN

Last but not least, DF-GAN (Deep Fusion GAN) (Tao et al., 2020) improves the results by proposing three concepts. The One-Stage Text-to-Image Backbone focuses on providing an architecture that is capable of abandoning the idea of multiple stacked generators and using a single one instead.
It achieves that through a smart combination of a couple of factors, i.a. hinge loss and the use of residual blocks. Additionally, a Matching-Aware Gradient Penalty helps achieve high semantic consistency between text and image and regularizes the learning process. Finally, a One-Way Output helps the process converge more effectively.

3.2.3 Dall-E 1

OpenAI's Dall-E undoubtedly took the text-to-image field to another level. For the first time, a model showed great zero-shot capabilities, comparable to those of previous domain-specific models. To achieve that, an unprecedented scale of the dataset and training process was needed.
250 million text-image pairs were collected for that purpose, which enabled training of a 12-billion-parameter version of the model. Unfortunately, Dall-E is not publicly available and follows the most recent trend of closed-source models. Or, to put it more precisely, it started this trend, and GLIDE, Dall-E 2, Imagen, Parti and others followed. Nevertheless, Dall-E's inner workings are described in Ramesh et al. (2021b), and this section will try to explain its most important parts. Before that, however, it is crucial to understand one of the fundamental concepts that has been around in the field of generative models for quite some time, namely Variational Autoencoders.

Variational Autoencoder (VAE)

The regular Autoencoder architecture aims to learn an identity function that finds a meaningful representation of the data in a lower-dimensional space and then reconstructs it.
It is considered an unsupervised learning method for dimensionality reduction; however, it is trained in a supervised regime, with the data itself being the label. The component performing the reduction is called an encoder, while the part responsible for the reconstruction is called a decoder. The idea behind the Variational Autoencoder (Kingma and Welling, 2013) is similar; however, instead of learning the mapping to a static low-dimensional vector, the model learns its distribution. This design equips the decoder with the desired generative capabilities, as sampling from the latent low-dimensional space results in varying data being generated. The architecture is depicted in Figure 3.22. qφ(z|x) denotes the encoder under the assumption that z comes from a multivariate Gaussian; µ and σ are learned.
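The sampling step in Figure 3.22, z = µ + σ·ε with ε ~ N(0, I), can be sketched in a few lines; the shapes and values below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    The noise is drawn outside the deterministic path, so gradients can
    flow through mu and log_var during training (the "reparameterization
    trick" from Kingma and Welling, 2013).
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Toy "encoder" output for a single input: mean and log-variance
mu = np.array([0.5, -1.0])
log_var = np.array([-2.0, -2.0])  # sigma ~ 0.37

z = reparameterize(mu, log_var, rng)
print(z.shape)  # (2,)
```

Drawing ε separately is what makes the encoder's µ and σ trainable by ordinary backpropagation.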
The reconstruction process is modelled by the conditional probability pθ(x|z), given a sampled latent vector z.

FIGURE 3.22: Variational (probabilistic) Autoencoder architecture. Figure from Weng (2018).

VQ-VAE / dVAE

The VQ-VAE (Vector Quantized VAE) (van den Oord et al., 2017) differs from the regular VAE in the way it approaches encoding the latent space. Instead of mapping data into a continuous distribution, the Vector Quantized version does it in a discrete way. This is motivated by the fact that many data modalities are more naturally represented in a discrete way (e.g. speech, human language, reasoning about objects in images, etc.). VQ-VAE achieves that by using a separate codebook of vectors. The architecture is depicted in Figure 3.23.

FIGURE 3.23: VQ-VAE architecture. Figure from van den Oord et al. (2017).

The idea is to map the output of the encoder to one of the vectors from the K-dimensional codebook. This process is called quantization and essentially means finding the vector that is the nearest neighbour to the encoder's output (in the sense of Euclidean distance).
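The nearest-neighbour quantization step can be sketched directly; the codebook size and vectors below are illustrative, not the ones from the paper:

```python
import numpy as np

def quantize(z_e, codebook):
    """Replace each encoder output vector with its nearest codebook entry.

    z_e:      (n, d) batch of encoder outputs
    codebook: (K, d) learnable embedding vectors
    Returns the chosen indices and the quantized vectors z_q.
    """
    # Squared Euclidean distance between every output and every codebook entry
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)  # nearest-neighbour index per vector
    return idx, codebook[idx]

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])  # K=3, d=2
z_e = np.array([[0.9, 1.2], [0.1, -0.2]])

idx, z_q = quantize(z_e, codebook)
print(idx)  # [1 0]
```

The indices `idx` are exactly the discrete tokens that later models (such as Dall-E's dVAE variant) operate on.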
From this point on, the newly found codebook vector is used instead. The codebook itself is also subject to the learning process. One could argue that passing gradients during training through such a discrete system might be problematic. VQ-VAE overcomes this problem by simply copying the gradients from the decoder's input to the encoder's output. A great explanation of the training process and further mathematical details can be found in Weng (2018) and Snell (2021).

Dall-E, however, uses what is called a dVAE. Essentially, it is a VQ-VAE with a couple of details changed. In short, the main difference is that instead of learning a deterministic mapping from the encoder's output to the codebook, it produces probabilities of a latent representation over all codebook vectors.

Dall-E system

Dall-E is composed of two stages. The above introduction of VQ-VAE was necessary to understand the first one: it trains a dVAE to compress 256x256 images into a 32x32 grid of tokens.
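The gradient-copying trick is often written as z_q = z_e + stop_gradient(z_q − z_e): the decoder numerically receives the codebook vector, while the backward pass treats quantization as the identity. A minimal sketch, where the `stop_gradient` helper is a stand-in for what autograd frameworks provide (e.g. `detach()` in PyTorch):

```python
import numpy as np

def stop_gradient(x):
    # In an autograd framework this would block gradient flow;
    # numerically it is just the identity.
    return x.copy()

z_e = np.array([0.9, 1.2])  # encoder output
z_q = np.array([1.0, 1.0])  # its nearest codebook vector

# Straight-through estimator: the forward value equals z_q, but
# d(out)/d(z_e) = I because the bracketed term is held constant.
out = z_e + stop_gradient(z_q - z_e)

print(np.allclose(out, z_q))  # True
```

This is why the encoder still receives a useful training signal even though the argmin in the quantization step has no gradient.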
This model will play a crucial role in the second stage. The second stage is about learning the prior distribution of text-image pairs. First, the text is byte-pair encoded (Sennrich et al., 2015a) into a maximum of 256 tokens, with a vocabulary of size 16384. Next, the image representation encoded by the previously trained dVAE is unrolled (from a 32x32 grid to 1024 tokens) and concatenated to the text tokens. This sequence (of 256+1024 tokens) is used as the input for a huge transformer-like architecture. Its goal is to model next-token prediction autoregressively. At inference time, the text caption is again encoded into at most 256 tokens. The generation process starts with predicting all of the next 1024 image-related tokens.
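The construction of this 256+1024-token input can be sketched as follows; `build_sequence`, the `PAD` id and the padding scheme are illustrative assumptions, not the exact Dall-E implementation:

```python
import numpy as np

TEXT_LEN, IMG_GRID = 256, 32
PAD = 0  # hypothetical padding id

def build_sequence(text_tokens, image_grid):
    """Concatenate up to 256 text tokens with 32x32 = 1024 unrolled image tokens."""
    assert len(text_tokens) <= TEXT_LEN
    assert image_grid.shape == (IMG_GRID, IMG_GRID)
    text = np.full(TEXT_LEN, PAD, dtype=np.int64)
    text[:len(text_tokens)] = text_tokens
    image = image_grid.reshape(-1)        # row-major unroll: 1024 tokens
    return np.concatenate([text, image])  # length 256 + 1024 = 1280

seq = build_sequence([5, 17, 99], np.arange(1024).reshape(32, 32))
print(seq.shape)  # (1280,)
```

The transformer is then trained to predict each position of such a sequence from the positions before it, which at inference time lets it generate the 1024 image tokens conditioned on the text prefix.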
They are later decoded with the dVAE decoder that was trained in the first stage. Its output represents the final image.

Results

The results achieved with the original Dall-E attracted so much attention mainly due to their diversity and zero-shot capabilities. Dall-E was capable of producing better results than previous state-of-the-art models, which were trained on data coming from the same domain as the data used for evaluation. One comparison can be seen in Figure 3.24. Outputs of some of the prior approaches described in this chapter, compared with Dall-E, can be seen in Figure 3.25.

Limitations

Although Dall-E made a huge step forward in text-to-image modelling, it still showed multiple flaws.
First, the photorealism of the outputs is still relatively low. In other words, when prompted for images containing realistic situations, it is rarely capable of deceiving human evaluators. Second, the model has evident problems with understanding relatively complex abstractions, such as text inside an image or relative object positions in the scene.

FIGURE 3.24: Human evaluation of Dall-E vs DF-GAN on text captions from the MS-COCO dataset. When asked for realism and caption similarity, evaluators preferred Dall-E's results over 90% of the time. Figure from Ramesh et al. (2021b).

FIGURE 3.25: Comparison of the results from Dall-E vs prior works on MS-COCO. Dall-E's outputs are chosen as the best out of 512 images, ranked by a contrastive model. Figure from Ramesh et al. (2021b).

3.2.4 GLIDE

Introduced by Nichol et al.
(2021b), GLIDE started an era of huge-scale diffusion models. The concept of diffusion had already been used in the area of Deep Learning for some time before. However, the authors of GLIDE took a step further and combined it with text-based guidance, which is supposed to steer the generation process in the direction of the text's meaning. This powerful method was proven to achieve outstanding results which, at the time of writing, remain competitive with current state-of-the-art models.

Diffusion models

Before understanding the inner workings of GLIDE, it is important to introduce the core concept driving it, namely diffusion. The idea of diffusion originates from physics. In short, it corresponds to the process of diffusing particles, for example of one fluid in another. Normally it has a unidirectional character; in other words, it cannot be reversed.
However, as Sohl-Dickstein et al. (2015) managed to show, and Ho et al. (2020a) later improved, if the data diffusion process is modelled as a Markov chain with Gaussian noise being added in consecutive steps, it is possible to learn how to reverse it. This reversed process is exactly how images are generated by the model from pure random noise. Let us construct a Markov chain, where the initial data point is denoted by x0. In t steps, Gaussian noise is added to the data. The distribution of the data at step t can be characterized in the following way:

q(xt | xt−1) := N(xt; √αt xt−1, (1 − αt)I)

where (1 − αt) parametrizes the magnitude of the noise being added at each step.
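One forward step of this chain can be sketched directly from the formula; the constant α schedule and the number of steps below are illustrative, not the schedule used by GLIDE:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_step(x_prev, alpha, rng):
    """Sample x_t ~ N(sqrt(alpha) * x_{t-1}, (1 - alpha) * I)."""
    noise = rng.standard_normal(x_prev.shape)
    return np.sqrt(alpha) * x_prev + np.sqrt(1.0 - alpha) * noise

x = np.ones(4)         # toy "image" x_0
alpha = 0.99           # illustrative per-step retention factor
for _ in range(1000):  # after many steps, x approaches pure Gaussian noise
    x = forward_step(x, alpha, rng)

print(x.shape)  # (4,)
```

Because the signal is scaled by √α at every step, its contribution decays geometrically, which is why the chain ends in (approximately) pure noise regardless of the starting image.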
Now, if xt−1 is to be reconstructed from xt, a model needs to learn to predict estimates of the gradients from the previous steps. The probability distribution of the previous step can be estimated as follows:

pθ(xt−1 | xt) = N(xt−1; µθ(xt), Σθ(xt))

where the mean function µθ was proposed by Ho et al. (2020a). For a more detailed explanation of how this is later parametrized and trained, one can follow Weng (2021).

GLIDE system

GLIDE can essentially be broken down into two parts. The first of them is the pretrained Transformer model, which in principle is responsible for creating the text embeddings. The last token embedding is used as a class embedding (text representation) in later stages.
Additionally, all tokens from the last embedding layer are used (attended to) by all attention layers in the diffusion model itself. This makes the model aware of the text's meaning while reconstructing the previous step in the Markov chain.

The second component of GLIDE is the diffusion model itself. A U-Net-like architecture with multiple attention blocks is used here. This part's sole goal is to model pθ(xt−1 | xt, y), where y corresponds to the last token embedding mentioned above. Or, to put it differently, to predict ϵθ(xt | y), since the problem can be reframed as calculating the amount of noise added at each step. Additionally, to make the model even more aware of the text's meaning, guidance is used at inference time. In short, the idea is to control the direction of the diffusion process.
The authors test two different approaches. First, they try guidance with the use of a separate classifier, OpenAI's CLIP in this case. However, better results were in general achieved by the classifier-free guidance process. The idea is to produce two different images at each step: one conditioned on the text, the other one not. The distance between them is calculated and then, after significant scaling, added to the image obtained without conditioning. This way, the model speeds up the progression of the image towards the meaning of the text. This process can be written as:

ˆϵθ(xt | y) = ϵθ(xt | ∅) + s · (ϵθ(xt | y) − ϵθ(xt | ∅))

where s denotes the parameter for scaling the difference between the mentioned images.
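The guidance formula itself is a one-liner; the two model outputs below are hypothetical stand-ins for ϵθ(xt | y) and ϵθ(xt | ∅):

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, s):
    """Extrapolate from the unconditional noise prediction towards the
    text-conditioned one by guidance scale s."""
    return eps_uncond + s * (eps_cond - eps_uncond)

eps_uncond = np.array([0.0, 1.0])  # hypothetical eps_theta(x_t | empty)
eps_cond = np.array([1.0, 1.0])    # hypothetical eps_theta(x_t | y)

# s = 1 recovers the conditional prediction; s > 1 over-emphasizes the text
print(classifier_free_guidance(eps_uncond, eps_cond, 1.0))  # [1. 1.]
print(classifier_free_guidance(eps_uncond, eps_cond, 3.0))  # [3. 1.]
```

Note that with s > 1 this is an extrapolation past the conditional prediction, which is exactly what pushes samples harder towards the caption at the cost of diversity.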
Results

GLIDE achieves significantly more photorealistic results compared to its predecessors. FID scores reported on the MS-COCO 256x256 dataset can be seen in Figure 3.26. It is worth noting that GLIDE was not trained on this dataset, hence its zero-shot capabilities are even more impressive.

FIGURE 3.26: Comparison of FID on MS-COCO 256×256. Figure from Nichol et al. (2021b).

Model | FID | Zero-shot FID
AttnGAN (Xu et al., 2017) | 35.49 |
DM-GAN (Zhu et al., 2019) | 32.64 |
DF-GAN (Tao et al., 2020) | 21.42 |
DM-GAN + CL (Ye et al., 2021) | 20.79 |
XMC-GAN (Zhang et al., 2021) | 9.33 |
LAFITE (Zhou et al., 2021) | 8.12 |
DALL-E (Ramesh et al., 2021) | | ~28
LAFITE (Zhou et al., 2021) | | 26.94
GLIDE | | 12.24
GLIDE (Validation filtered) | | 12.89

The results are also preferred by human evaluators in terms of photorealism and the similarity of the image to its caption. A comparison to DALL-E 1 results can be seen in Figure 3.27.

FIGURE 3.27: Win probabilities of GLIDE vs DALL-E. Figure from Nichol et al. (2021b).

Finally, some of the cherry-picked images together with their corresponding captions can be seen in Figure 3.28.
FIGURE 3.28: Samples from GLIDE with classifier-free guidance and s=3. Figure from Nichol et al. (2021b).

Limitations

GLIDE suffers from two problems. First, it fails when presented with a complex or unusual text prompt; a few examples can be seen in Figure 3.29. Second, the model is relatively slow at inference time (much slower than GANs). This is caused by the sequential character of the architecture: consecutive steps of the Markov chain reconstruction cannot simply be parallelized.
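The slowdown and the role of the guidance scale s are easiest to see in a toy version of the sampling loop. The sketch below is illustrative only (the hypothetical `eps_model` stands in for GLIDE's U-Net noise predictor, and the update rule is heavily simplified): every iteration consumes the previous iterate, so the chain cannot be parallelized, and classifier-free guidance costs two model evaluations per step.

```python
import numpy as np

def eps_model(x, t, cond):
    # Hypothetical stand-in for GLIDE's U-Net noise predictor.
    # cond is a caption embedding, or None for the unconditional branch.
    return 0.1 * x if cond is None else 0.1 * x - 0.01 * cond

def sample(caption_emb, steps=50, s=3.0, seed=None):
    """Sequential ancestral sampling with classifier-free guidance (scale s)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(64)              # start from pure noise
    for t in range(steps, 0, -1):            # each step needs the previous x
        eps_cond = eps_model(x, t, caption_emb)
        eps_uncond = eps_model(x, t, None)
        # classifier-free guidance: extrapolate towards the conditioned prediction
        eps = eps_uncond + s * (eps_cond - eps_uncond)
        x = x - eps                          # toy update; real samplers also re-add noise
    return x
```

With s=1 the unconditional term cancels and plain conditional sampling is recovered; s>1 (such as the s=3 used above) over-weights the caption at the cost of diversity.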
3.2.5 Dall-E 2 / unCLIP

The contribution that probably attracted the most attention in the field is known under the name Dall-E 2 (Ramesh et al., 2022a). For the first time, the wider public took an interest in its potential applications. This might be due to the great PR from its authors, OpenAI. Dall-E 2, also known as just Dall-E, or unCLIP, has been advertised as a successor of Dall-E 1, on whose results it significantly improved. In reality, the architecture and the results it achieved are much more similar to those of GLIDE. Additionally, social media has been flooded with images generated by the model. This was possible thanks to OpenAI giving access to everybody who was interested and patient enough to get through a waiting list.

Win probabilities of GLIDE vs the DALL-E variants (cf. Figure 3.27):

DALL-E variant                        Temp.   Photorealism   Caption similarity
No reranking                          1.0     91%            83%
No reranking                          0.85    84%            80%
DALL-E reranked                       1.0     89%            71%
DALL-E reranked                       0.85    87%            69%
DALL-E reranked + GLIDE blurred       1.0     72%            63%
DALL-E reranked + GLIDE blurred       0.85    66%            61%

(Prompts shown in Figure 3.28: "a crayon drawing of a space elevator", "a futuristic city in synthwave style", "a pixel art corgi pizza", "a fog rolling into new york".)

118 3 Multimodal architectures

FIGURE 3.29: Failures happen mostly for unusual prompts. Figure from Nichol et al. (2021b).
However, the model itself again remains unpublished. Another factor that might have contributed to Dall-E's success was its inpainting and outpainting capabilities, although it is worth mentioning that these were already possible with GLIDE. In essence, UnCLIP is a very smart combination of prior work from OpenAI that was re-engineered and applied in a novel way. Nevertheless, the model represents a significant leap forward, which is why it cannot be omitted in this chapter.

Dall-E 2 system

UnCLIP consists of two components: a prior and a decoder. Let x be the image and y its caption. z_i and z_t are the CLIP image and text embeddings of this (x, y) pair. Then, the prior P(z_i|y) is responsible for producing CLIP image embeddings conditioned on the text caption.
A decoder P(x|z_i, y) outputs an image conditioned on the CLIP image embedding and, again, the text caption itself. For the prior, the authors try two different approaches, namely autoregressive and diffusion models. The latter ended up yielding slightly better results. The diffusion prior is a Transformer taking as input a special sequence consisting of the encoded text prompt, the CLIP text embedding, an embedding for the diffusion timestep, and a noised CLIP image embedding. The decoder again consists of diffusion models. Firstly, a GLIDE-like model takes a CLIP image embedding as its x_t instead of the pure noise used in the original version. Similarly to the original GLIDE, classifier-free guidance is applied, however with slight differences. Lastly, two diffusion upsampler models are trained to bring images first from 64x64 to 256x256, and then from 256x256 to 1024x1024 resolution.
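The two components factorize generation as P(x|y) = P(x|z_i, y) · P(z_i|y). The pipeline can be sketched as below; this is purely illustrative toy code, and every function in it (`clip_text_embed`, `prior_sample`, `decoder_sample`, `upsample`) is a hypothetical stand-in, not the actual unCLIP implementation.

```python
import numpy as np

def clip_text_embed(caption):
    # Placeholder for the frozen CLIP text encoder producing z_t.
    return np.full(512, float(len(caption)))

def prior_sample(z_t, caption):
    # Stage 1, the prior P(z_i | y): predict a CLIP *image* embedding
    # from the caption and its CLIP text embedding.
    return z_t * 0.5

def decoder_sample(z_i, caption):
    # Stage 2, the decoder P(x | z_i, y): a GLIDE-like diffusion model
    # conditioned on the predicted image embedding (and the caption).
    return np.zeros((64, 64, 3)) + z_i.mean()

def upsample(x, size):
    # Stand-in for the two diffusion upsamplers: 64x64 -> 256x256 -> 1024x1024.
    f = size // x.shape[0]
    return np.kron(x, np.ones((f, f, 1)))

def unclip(caption):
    z_t = clip_text_embed(caption)
    z_i = prior_sample(z_t, caption)
    x = decoder_sample(z_i, caption)          # 64x64 base image
    x = upsample(x, 256)
    return upsample(x, 1024)                  # final 1024x1024 image
```

The factorization is the key design choice: the prior commits to *what* the image should depict (in CLIP embedding space) before the decoder decides *how* it looks in pixels.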
The authors found no benefit in conditioning these models on text captions.

(Figure 3.29 prompts: "an illustration of a cat that has eight legs", "a bicycle that has continuous tracks instead of wheels", "a mouse hunting a lion", "a car with triangular wheels".)

Finally, unCLIP can be summarized as a mixture of GLIDE and CLIP with a lot of engineering behind it.

Results

When compared to GLIDE, unCLIP shows it is capable of representing a wider diversity of the data, while achieving a similar level of photorealism and caption similarity. Comparison to previous works on the MS-COCO dataset shows that unCLIP achieves an unprecedented FID (Figure 3.30). A few output examples generated from MS-COCO captions can be found in Figure 3.31.
FIGURE 3.30: Comparison of FID on MS-COCO. The best results for unCLIP were reported with a guidance scale of 1.25. Figure from Ramesh et al. (2022a).

Limitations

UnCLIP suffers from problems very similar to those of its predecessor GLIDE. First, the model sometimes confuses the compositionality of the images; failure cases can be seen in Figure 3.32. Second, UnCLIP struggles with generating coherent text inside an image (Figure 3.33).
The authors hypothesize that using CLIP embeddings, although improving diversity, might make these problems more evident than in GLIDE. Lastly, UnCLIP often fails to deliver details in highly complex scenes (Figure 3.34). Again, according to the authors, this might result from the fact that the decoder produces only 64x64 images which are later upsampled.

3.2.6 Imagen & Parti

Only a few months after unCLIP was released by OpenAI, Google came into play for the first time with its new model called Imagen (Saharia et al., 2022b). Another one followed just two months later: Parti (Yu et al., 2022b).
Both of these models pushed the boundaries even further, although they take entirely different approaches. Neither of them introduces a completely new way of looking at the problem of text-to-image generation; their advancements come from engineering and from further scaling of existing solutions. However, it must be stressed that currently (September 2022) they deliver the most outstanding results.

Model                               FID     Zero-shot FID   Zero-shot FID (filt)
AttnGAN (Xu et al., 2017)          35.49
DM-GAN (Zhu et al., 2019)          32.64
DF-GAN (Tao et al., 2020)          21.42
DM-GAN + CL (Ye et al., 2021)      20.79
XMC-GAN (Zhang et al., 2021)        9.33
LAFITE (Zhou et al., 2021)          8.12
Make-A-Scene (Gafni et al., 2022)   7.55
DALL-E (Ramesh et al., 2021)               ~28
LAFITE (Zhou et al., 2021)                 26.94
GLIDE (Nichol et al., 2021)                12.24           12.89
Make-A-Scene (Gafni et al., 2022)          11.84
unCLIP (AR prior)                          10.63           11.08
unCLIP (Diffusion prior)                   10.39           10.87

FIGURE 3.31: Image samples on MS-COCO text prompts. Figure from Ramesh et al. (2022a).

FIGURE 3.32: 'a red cube on top of a blue cube'. Figure from Ramesh et al. (2022a).

FIGURE 3.33: 'A sign that says deep learning.' Figure from Ramesh et al. (2022a).

Imagen is a diffusion model. Its main contribution is that instead of using a text encoder trained on image captions, it uses a huge pretrained NLP model called T5-XXL (Raffel et al., 2019b), taken off the shelf and frozen. The authors argue that this helps the model understand language much more deeply, as it has seen more diverse and complex texts than just image captions.
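That design choice can be sketched as follows. This is toy code under stated assumptions: `t5_encode` stands in for the frozen T5-XXL encoder and `ToyImagen` for Imagen's trainable diffusion cascade; neither reflects the real implementation. The point is simply that the text encoder's weights are never updated, and only the generator trains on its fixed outputs.

```python
import numpy as np

def t5_encode(caption):
    # Stand-in for the frozen, off-the-shelf T5-XXL encoder: its weights are
    # fixed, so the same caption always maps to the same features.
    rng = np.random.default_rng(sum(map(ord, caption)))
    return rng.standard_normal((len(caption.split()), 16))

class ToyImagen:
    # Stand-in for Imagen's trainable diffusion cascade; only this part learns,
    # on top of the frozen text representations.
    def __init__(self):
        self.w = np.zeros(16)

    def train_step(self, caption, image):
        # (the image argument is unused in this toy update)
        text_emb = t5_encode(caption)                    # frozen features
        self.w = self.w + 1e-3 * text_emb.mean(axis=0)   # toy parameter update
        return self.w

model = ToyImagen()
w = model.train_step("a corgi riding a bike", np.zeros((64, 64, 3)))
```

Freezing the encoder also means the expensive language model never has to be loaded with gradients during image-model training, which is part of what makes scaling it up cheap.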
On the other hand, Parti takes an autoregressive approach. Similarly to the first version of Dall-E, it consists of two stages, namely an image tokenizer and a sequence-to-sequence autoregressive part responsible for generating image tokens from a set of text tokens. In this case, ViT-VQGAN (Yu et al., 2021) is used as the tokenizer, and the autoregressive component is again Transformer-like.

FIGURE 3.34: 'A high quality photo of Times Square.' (a) unCLIP, (b) GLIDE. Figure from Ramesh et al. (2022a).

Results

Both of the models improved the FID significantly compared to previous works.
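Parti's two stages described above can be caricatured in a few lines. This is hypothetical toy code: `image_detokenize` stands in for the ViT-VQGAN decoder and `next_token` for the sequence-to-sequence Transformer, and the codebook size is only an assumed illustrative value. Text tokens go in, image tokens come out one at a time, and the tokenizer's decoder maps them back to pixels.

```python
import numpy as np

VOCAB = 8192  # assumed ViT-VQGAN-style codebook size (illustrative)

def image_detokenize(tokens, side=8):
    # Stand-in for the ViT-VQGAN decoder: map an 8x8 grid of discrete codes
    # back to a 64x64 "image" (each code covers an 8x8 patch).
    grid = np.array(tokens).reshape(side, side)
    return np.repeat(np.repeat(grid, 8, axis=0), 8, axis=1) / VOCAB

def next_token(text_tokens, image_tokens):
    # Stand-in for the seq2seq Transformer: predict the next image token from
    # the text tokens plus all image tokens generated so far.
    return (sum(text_tokens) + 7 * len(image_tokens)) % VOCAB

def parti_generate(caption, n_tokens=64):
    text_tokens = [ord(c) for c in caption]
    image_tokens = []
    for _ in range(n_tokens):                # autoregressive: one token at a time
        image_tokens.append(next_token(text_tokens, image_tokens))
    return image_detokenize(image_tokens)
```

Note the structural contrast with the diffusion models above: here the sequential bottleneck is one discrete token per step rather than one denoising step per iteration.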
Figure 3.35 shows the comparison.

FIGURE 3.35: Comparison of FID on MS-COCO. Figure from Yu et al. (2022b).

                                          MS-COCO FID (↓)          LN-COCO FID (↓)
Approach                  Model Type      Zero-shot   Finetuned    Zero-shot   Finetuned
Random Train Images [10]                              2.47
Retrieval Baseline                        17.97       6.82         33.59       16.48
TReCS [46]                GAN                                                  48.70
XMC-GAN [47]              GAN                         9.33                     14.12
DALL-E [2]                Autoregressive  ~28
CogView [3]               Autoregressive  27.1
CogView2 [61]             Autoregressive  24.0        17.7
GLIDE [11]                Diffusion       12.24
Make-A-Scene [10]         Autoregressive  11.84       7.55
DALL-E 2 [12]             Diffusion       10.39
Imagen [13]               Diffusion        7.27
Parti                     Autoregressive   7.23       3.22         15.97        8.39

Samples from Parti can be seen in Figure 3.36. They are included here on purpose - this is the current state of the art as of the moment of writing!

FIGURE 3.36: Selected samples from Parti. Figure from Yu et al. (2022b).

Limitations

Yu et al.
(2022b) mention an extensive list of problems with which Parti still struggles. At this point, all of them can be treated as a set common to almost all available models. Among others, they include:

- feature blending (where features of two different objects are mixed)
- omission or duplication of details
- displaced positioning of objects
- counting
- negation in text prompts

and many more. These flaws pose a challenge for future research, and they are undoubtedly the ones that need to be addressed first to enable another leap forward in the field of text-to-image generation.

3.2.7 Discussion

Lastly, it is important to mention a couple of different topics, or trends, which are intrinsically linked with text-to-image generation. Together with the previous
sections, they should give the reader a holistic view of where research currently stands (again, as of September 2022).

Open- vs closed-source

The first trend, which has emerged only recently, is that AI labs do not open-source their state-of-the-art models and training data. This is in clear opposition to how the entire AI community has behaved since the very beginning of the recent Deep Learning era. Apparently, the potential commercial opportunities that come along with owning the software are too big to be ignored. The trend is very disruptive - it is clear that the community is currently witnessing the maturation of AI business models. Needless to say, it is followed by all the greatest AI labs, to name just a few: OpenAI, DeepMind, Google Brain, Meta AI, and many others. As long as commercial achievements have an edge over academic research, it is highly doubtful that the trend will be reversed.
However, it needs to be stressed that all of them still publish more or less detailed technical specifications of their work in the form of scientific papers, which is definitely a positive factor. We, as a community, can only hope this will not change in the future.

Open-Source Community

While the trend towards closed-sourceness is clearly visible across many Deep Learning areas, text-to-image research is actually well represented by an open-source community. The most important milestones of recent years have indeed come from OpenAI; however, new approaches keep emerging from a wide community of researchers. Many of these models are public, meaning that any user with minimal coding experience can play with them. Although we decided not to go into the details of particular works, it is important to name a few that became the most popular:

- VQGAN-CLIP (Crowson et al., 2022)
- Midjourney (Midjourney, 2022)
- Latent Diffusion (Rombach et al.
, 2021)
- Stable Diffusion (Rombach et al., 2022)

Potential applications

Image generation that can be done in a controllable manner undoubtedly has huge potential for commercialization. Although the field is currently still very immature, hypotheses about which industries might be disrupted are already emerging. Essentially, every branch that has to do with generating visual art, be it static images or videos, should observe the trend closely. Graphic design, movie making, stock photos - just to name a few that might be affected. Currently, experimental use cases in the areas of texture synthesis, product design, and building virtual-reality worlds can already be observed. AI, even if still incapable of generating the final product, can help automate a significant part of the production chain, which essentially means time and money savings.
The inpainting and outpainting capabilities of recent models play a significant role in this trend. Although it is still very hard to judge which direction this will take in the future, it will definitely be a very interesting and disruptive change. Who wouldn't like to see movies soon being generated directly from a book's text, pixel value by pixel value?

Ethics / Conclusion

Automated image generation poses an array of serious ethical questions. Fortunately, many of them are already very well recognized by the community. For example, OpenAI elaborates extensively on the risks and limitations of its Dall-E 2 in the blog post by Mishkin et al. (2022). A few of the most important topics are presented here.
The first and very significant risk is the potential misuse of the models. Fake image generation can easily be used for harassment and disinformation. Especially combined with inpainting, which is capable of erasing or adding objects to real scenes, it poses a non-trivial challenge for researchers on how to share their work responsibly. Another important area touches on the biases and stereotypes which are intrinsically built into the technology. Obviously, a model combines concepts from the data it has seen. However, if this area is to be commercialized, broader diversity needs to be ensured. An interesting example of Dall-E 2 samples can be seen in Figure 3.37. In order to fully enable AI generation, the problem of copyright needs to be solved in the first place.
It is definitely not clear who the author of a generated image is. Is it the person who came up with the text prompt and ran the model? Is it the model engineer? The author of the model's architecture? The owner of the data it has been trained on? Or maybe the model itself? Another question is what really constitutes a creative contribution that should eventually result in copyright being granted. These and many other questions definitely require extensive debate and, hopefully, legal solutions to follow.

3.3 Images supporting Language Models

Author: Giacomo Loss

Supervisor: Matthias Aßenmacher
3.3.1 Words In (Non-Symbolic) Contexts

Imagine you were alone in a foreign country, you could not speak the language, and the only resource you had was a dictionary in that foreign language. You see a word written on a sign but you cannot understand its meaning.

FIGURE 3.37: Biased samples from Dall-E 2. Figure from Mishkin et al. (2022).

What could you do? One idea would be to open the dictionary and look the word up. The problem is that the word is defined by using other words in the foreign language.
[Figure 3.37 sample: Prompt: nurse; Date: April 6, 2022]

As a second step you would thus look these new words up, and continue like that in further steps to "infinity and beyond" (cit. Buzz Lightyear). But even after looking up every single word in the dictionary, you would still not be able to understand the meaning of the word written on the sign. If, on that sign, something else was depicted next to the unknown word, for example an image of a fork and a knife, you might speculate that the word indicates something which has to do with food, like a restaurant - and this without explicitly knowing the meaning of the word. This example is inspired by the work of Stevan Harnad, who formulated at the beginning of the 90's the so-called Symbol Grounding Problem (Harnad (1990)). It asserts that it is not possible to understand the meaning (semantics) of a word by just looking
at other words, because words are essentially meaningless symbols. It is possible to understand the meaning only if the word is put in a context, a perceptual space, other than that of written language: the word must be grounded in non-symbolic representations, like images, for example. Over the past 10 years there has been a whopping development of distributional semantic models (henceforth DSMs), especially after the Word2vec (Mikolov et al. (2013b)) revolution. This family of models assumes that the meaning of words and sentences can be inferred from the "distribution" of those words and sentences within a text corpus (the Distributional Hypothesis formulated by Harris et al. (1954)). But the Symbol Grounding Problem mentioned earlier suggests that DSMs do not resemble the way words are learned by humans, which is in multimodal perceptual contexts.
For these reasons, models have been developed with the goal of integrating further modalities (like visual ones) into pure language models, assuming that grounding words and sentences in other perceptual contexts should lead to a better understanding of their semantics and, as a result, to better performance in pure language tasks. The focus of this subchapter is on models which empower pure language models with visual modalities in the form of images: their goal is to obtain better semantic representations (in the form of embedding vectors) of words. First, a quick recap of the main pure language models will be provided. After that, the historical evolution of the integration of images as visual modalities into pure language models will be discussed: from the simple concatenation of textual and visual modalities, to the projection of visual elements into a common grounded space and, more recently, the use of Transformers (see Figure 3.38). Eventually, a comprehensive evaluation of the different models against benchmarks will be carried out.
Again, the focus is on how to employ visual elements to obtain embeddings able to capture the semantics of words. More concrete applications, such as those in the field of machine translation, are out of scope and will only be marginally addressed at the end of the subchapter.

FIGURE 3.38: Historical evolution of models which integrate visual information into pure language models.
[Figure 3.38 timeline (2014, 2016, 2019, ...): categories Sequential Embeddings, Grounded Embeddings, Transformers, Vokenization; models: Hill et al., Bruni et al., Kiela et al., Collell et al., MM Skipgram, Lu et al., Bordes et al., Shahmohammadi et al., VisualBERT, VILBERT, LXMERT, UNITER, UniT, UFO, FLAVA, XDBERT, FLAMINGO]
3.3.2 Word-Embeddings: Survival-Kit

In other parts of this book, the most important NLP models and the latest developments in the field are extensively described. In this section, some information will be provided which might be helpful for understanding some of the aspects discussed in this subchapter. As may have been inferred from the introduction, the starting point is always a pure language model, namely a model which employs only textual inputs in order to generate word embeddings, which are representations of words in the form of numerical vectors. The most widely used pure language models in the papers presented in this subchapter are the following three:

- Skipgram (Word2vec, Mikolov et al. (2013b)): given a target word, the probability of the neighboring (surrounding) words in a pre-defined window has to be maximized. Training takes place either through a hierarchical softmax or through negative sampling, which involves maximizing the probability of words which are real neighbors and minimizing that of words which are not real neighbors (the "negative samples").
- GloVe (Pennington et al.
(2014)): based on word co-occurrences across the entire corpus, with the goal of minimizing the difference between the dot product of the embedding vectors of two words and the logarithm of their number of co-occurrences.
- BERT (Devlin et al. (2018c)): two pre-training tasks are used to obtain word embeddings:
  - Masked Language Modelling (MLM): given a sentence with [MASK]ed tokens, the goal is to predict these masked tokens.
  - Next Sentence Prediction (NSP): given two sentences A and B, the goal is to predict whether B follows A.

Two additional remarks conclude this section. First, Skipgram and GloVe generate embeddings which are "context-free": they do not take into account the context in which words occur. On the contrary, BERT is designed to represent words given the context (sentence) in which they occur: we can thus have different embeddings for the same word, depending on the context. Second, the inputs of these models are tokens: with the help of a tokenizer, which can differ between models, the text is split into "chunks", called tokens (which are not necessarily single words).
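The two context-free objectives described above can be sketched in a few lines. The following is a minimal, illustrative numpy sketch, not a training loop: the vectors are random toy values, the GloVe term omits the paper's co-occurrence weighting function, and all names are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss(v_target, v_context, v_negatives):
    """One skip-gram negative-sampling term: reward the true neighbor,
    penalize sampled non-neighbors (negative log-likelihood, to minimize)."""
    pos = np.log(sigmoid(v_context @ v_target))
    neg = np.sum(np.log(sigmoid(-(v_negatives @ v_target))))
    return -(pos + neg)

def glove_loss(w_i, w_j, b_i, b_j, cooc):
    """One (unweighted) GloVe term: squared gap between the dot product
    (plus biases) and the log co-occurrence count."""
    return (w_i @ w_j + b_i + b_j - np.log(cooc)) ** 2

rng = np.random.default_rng(0)
d = 50                                 # toy embedding dimension
v_t = rng.normal(size=d) * 0.1
v_c = rng.normal(size=d) * 0.1
v_neg = rng.normal(size=(5, d)) * 0.1  # 5 negative samples
print(sgns_loss(v_t, v_c, v_neg))      # positive scalar
print(glove_loss(v_t, v_c, 0.0, 0.0, cooc=12))
```

In an actual model both terms would be summed over all (target, context) pairs or co-occurring word pairs and minimized by gradient descent.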
3.3.3 The Beginning: Sequential Multimodal Embeddings

Supposing we have linguistic and visual feature representations related to a particular word, how could we fuse them? One intuitive idea would be to concatenate the textual and visual modalities. Let Vtext be the textual (vectorial) representation of a word and let Vimg be its visual (vectorial) representation; a fused representation F of a certain word w might take the following simplified form:

F = γ(Vtext) ⊕ (1 − γ)Vimg

where ⊕ denotes concatenation and γ is a tuning parameter which controls the relative contribution of both modalities to the final fused representation. Bruni et al.
(2014) propose a model where the meaning of a target word is represented in the form of a semantic vector, and all vectors are collected in a text-based semantic matrix; the textual embeddings are computed based on (transformed) co-occurrence counts of words in a pre-defined window. The starting point for obtaining an image-based representation of a certain target word is a dataset of labeled images. For each image associated with the target word (which means that the target word is found in the image's caption), low-level features called "local descriptors" - which incorporate geometric information about specific areas of a certain picture - are extracted, and these descriptors are then assigned to clusters (bags) of "visual words"¹. Afterwards, for each target word, the visual word occurrences are summed up to obtain the occurrence counts related to the target word. These image-based semantic vectors are then transformed and collected in an image-based semantic matrix.
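The visual-word counting step just described can be sketched as follows. This is a minimal numpy illustration in which the local descriptors and the codebook of visual-word centroids are random stand-ins; a real pipeline would extract e.g. SIFT-like descriptors and cluster them to obtain the codebook.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins: 128-dim local descriptors from all images whose captions contain
# the target word, plus a codebook of 50 "visual word" centroids.
descriptors = rng.normal(size=(200, 128))
codebook = rng.normal(size=(50, 128))

# Assign each descriptor to its nearest visual word ...
dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
assignments = dists.argmin(axis=1)

# ... and sum the occurrences: the resulting count vector is the target
# word's image-based semantic vector.
counts = np.bincount(assignments, minlength=len(codebook))
print(counts.shape)  # one count per visual word
```

Stacking one such count vector per target word (after the transformations mentioned above) yields the image-based semantic matrix.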
The two matrices are then concatenated and projected into a common latent multimodal space with a singular value decomposition. Through this process, a textual mixed matrix and a visual mixed matrix are extracted and then combined according to different fusion strategies to build the multimodal embeddings. In this first, relatively cumbersome (historically motivated) example, the vector representation of an image is obtained with non-trivial feature engineering. In recent years, the use of neural networks has made "automatic feature selection" possible. This is what, for example, Kiela and Bottou (2014) propose: extracting visual features from the first seven layers of a convolutional neural network (proposed by Krizhevsky et al. (2012b)) trained on 1.6 million images from the ImageNet database (Deng et al. (2009)), which produces scores for 1,512 object categories.
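Both ingredients of this sequential approach, the SVD projection into a common latent space and the γ-weighted concatenation from the formula above, can be illustrated with a minimal numpy sketch; the two matrices here are random stand-ins, not real co-occurrence or visual-word data, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, d_text, d_img, k = 40, 30, 20, 10

T = rng.normal(size=(n_words, d_text))  # text-based semantic matrix (stand-in)
V = rng.normal(size=(n_words, d_img))   # image-based semantic matrix (stand-in)

# Concatenate both matrices and project the rows into a common latent
# multimodal space via a truncated singular value decomposition.
M = np.hstack([T, V])
U, S, _ = np.linalg.svd(M, full_matrices=False)
latent = U[:, :k] * S[:k]               # k-dimensional multimodal embeddings

# The simple weighted concatenation F = γ(Vtext) ⊕ (1 − γ)Vimg, per word:
gamma = 0.5
F = np.hstack([gamma * T, (1 - gamma) * V])
print(latent.shape, F.shape)
```

Varying γ shifts how much the textual versus the visual modality contributes to the fused representation.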
The linguistic part of the model relies on the Skipgram model by Mikolov et al. (2013b) and consists of 100-dimensional vector representations. The multimodal representation is again obtained by concatenation of both modalities. Another notable example of the concatenation/sequential combination of textual and visual modalities is the work of Silberer and Lapata (2014): textual and visual modalities are represented by separate vectors of textual and visual attributes. During training, these textual and visual input vectors are separately fed to (unimodal) denoising autoencoders, the training objective of which is the reconstruction of a corrupted input - e.g. corrupted through masking noise - from a latent representation. Their outputs are then jointly fed to a bimodal autoencoder to be mapped to a multimodal space, on which

¹See for example Bosch et al.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' (2007) for more details on this technique, called “bag-of- visual-words”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 130 3 Multimodal architectures FIGURE 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='39: From Kiela and Bottou (2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Textual and visual features vectors are concatenated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' a softmax layer (classification layer) is added, which allows the architecture to be fine-tuned for different tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 The Grounded Space The aforementioned models assume implicitly a one-to-one correspondence between text and images: a visual representation is extracted only from words which are associated to a concrete image.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' This is a limitation, for two partially overlapping reasons.' 
On the one hand, how can we depict words for which no image is available in our training set? Is it possible to imagine visual representations purely from linguistic ones? On the other hand, could we hypothetically find a visual representation for each word? This might be true for concrete words, but when it comes to abstract ones, it is not always possible to find suitable visual representations or, in other terms, many words are not visually grounded. For these reasons, researchers have addressed the question: could we map textual and visual elements to a grounded space and design models able to generalize images and words beyond those in the training set? Well, the answer is yes! Lazaridou et al. (2015) propose a multimodal Skip-gram architecture where the objective function of a Skip-gram is “augmented” with an additional visual objective:

$$\frac{1}{T}\sum_{t=1}^{T}\left(\mathcal{L}_{ling}(w_t) + \mathcal{L}_{vision}(w_t)\right)$$

where $\mathcal{L}_{ling}$ is the Skip-gram loss function and $\mathcal{L}_{vision}$ is the additional visual loss for the target word $w_t$.
In particular, $\mathcal{L}_{vision}$ has the form of a hinge loss, the goal of which is to make the (vectorial) linguistic representation of a certain word more similar to its visual representation:

$$\mathcal{L}_{vision}(w_t) = -\sum_{w' \sim P_n(w)} \max\left(0,\; \gamma - \cos(z_{w_t}, v_{w_t}) + \cos(z_{w_t}, v_{w'})\right)$$

where $v_{w'}$ is a visual representation of a randomly chosen word $w'$ (drawn from a probability distribution $P_n(w)$) used as negative sample, $v_{w_t}$ is the corresponding visual vector and $z_{w_t}$ is the target multimodal word representation which has to be learned by the model. It is nothing more than a linear transformation of a word representation $u_{w_t}$:

$$z_{w_t} = M^{u \to v} u_{w_t}$$

where $M^{u \to v}$ is a cross-modal mapping matrix from linguistic inputs to a visual representation.
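A minimal sketch of this hinge loss and of the linear cross-modal mapping (written here from the formulas above, not taken from the authors' code; matrix and vector values are illustrative):

```python
import math

def cos(u, v):
    """Cosine similarity between two vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def matvec(M, u):
    """z_wt = M^{u->v} u_wt: linear cross-modal mapping of a word vector."""
    return [sum(m * x for m, x in zip(row, u)) for row in M]

def l_vision(z_t, v_t, negatives, gamma=0.5):
    """Visual hinge loss: reward z_t for being closer (in cosine terms)
    to its own visual vector v_t than to negative-sample visuals v'.
    Sign follows the text: this term is added to the objective
    being maximized."""
    return -sum(max(0.0, gamma - cos(z_t, v_t) + cos(z_t, v_neg))
                for v_neg in negatives)

# A word whose mapped vector already aligns with its image incurs no loss:
z = matvec([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])  # identity mapping
print(l_vision(z, [1.0, 0.0], negatives=[[-1.0, 0.0]]))  # 0.0
```

With the margin satisfied (the positive cosine exceeds every negative cosine by at least $\gamma$), the `max` terms vanish and the visual loss contributes nothing, which matches the intuition that well-grounded words need no further adjustment.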
It is important to remark that during training, for words which do not have associated images, $\mathcal{L}_{vision}$ is set to zero. Once this cross-modal mapping matrix is estimated, it is possible to find a visual representation for new words which do not have a related image in the training set: the model allows us to imagine new words. This is what is meant by grounded space: a perceptual (visual, in this case) space where a word is grounded, put in context.

FIGURE 3.40: From Lazaridou et al. (2015). The linguistic embedding of the word ‘cat’ is mapped to a visual space, such that the similarity of vector representations of words and associated images is maximized.

Similar instances of a cross-modal mapping can be found for example in Kottur et al. (2016) (a multimodal extension of the CBOW specification of word2vec) and in Collell et al. (2017), where visual features are obtained from the forward pass of a CNN pre-trained on ImageNet (Deng et al. (2009)), and a mapping function from the textual space to the visual space is obtained as a result of the training process. Also in this case it is possible to generate a visual representation from the embedding of a certain word, not necessarily present in the training set. In particular, they propose two specifications of the mapping function: a simple linear mapping and a neural network with a single hidden layer. Last but not least, Hill and Korhonen (2014) recognize that concrete nouns are more likely to have a visual representation. For this reason, they map a set of concrete words (CSLB, Devereux et al. (2014)) to “bags of perceptual/visual features” and every time one of these words is encountered during training, the Skip-gram model they are using stops training on that sentence and instead continues the training on a newly created “pseudo-sentence”, which takes into consideration the aforementioned bag of perceptual features. This list is unfortunately not exhaustive and there are other models with similar ideas, for example Ailem et al. (2018) or Kiros et al. (2018). The aforementioned papers and related models focus on modeling the semantics of words. Nonetheless, there are models designed to address tasks at sentence level, such as sentiment analysis or sentence entailment. Kiela et al. (2017) employ a bidirectional Long Short-Term Memory (LSTM, Hochreiter and Schmidhuber (1997)) architecture to model sentence representations, in order to gain information from the text in both directions.
The goal is again to encode a sentence and ground it in an image. Textual embeddings are obtained with GloVe (Pennington et al. (2014)) and are then projected onto a grounded space with a linear mapping. This grounded word vector serves as input for the bidirectional LSTM, which is trained together with the linear mapping. Their model is versatile: depending on the loss function specification, it can not only propose alternative captions to an image (which is a way to frame sentence equivalence tasks) but also predict captions from images, or perform both tasks at the same time. This last point highlights an important characteristic of many of the models discussed in this subchapter: even though the focus is on the empowerment of pure language models with the addition of visual elements, some of the models discussed here can be used for purposes other than pure language tasks.
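The grounding step of this kind of sentence model can be sketched as follows; this is an illustrative reconstruction, not Kiela et al.'s code, and the weight matrix `W`, bias `b`, and the toy embeddings are invented placeholders (in the real model both are learned jointly with the BiLSTM, which is omitted here):

```python
def ground(word_vec, W, b):
    """Project a textual embedding (e.g. a GloVe vector) into the
    grounded space with a learned linear map: g = W x + b."""
    return [sum(w * x for w, x in zip(row, word_vec)) + bi
            for row, bi in zip(W, b)]

def ground_sentence(sentence_vecs, W, b):
    """Ground every token of a sentence; the resulting sequence of
    grounded vectors is what would be fed to the bidirectional LSTM."""
    return [ground(v, W, b) for v in sentence_vecs]

# Toy 2-dim embeddings for a two-token sentence, identity mapping:
W, b = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
grounded = ground_sentence([[0.1, 0.9], [0.8, 0.2]], W, b)
print(grounded)  # [[0.1, 0.9], [0.8, 0.2]]
```

The design choice worth noting is that the mapping is trained end-to-end with the sentence encoder, so the grounded space is shaped by whichever loss (caption ranking, caption prediction, or both) is attached on top.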
The control over which task is performed is usually exercised either by specifying different loss functions (as in the last model described) or by properly setting certain hyperparameters (as in the previously described model by Silberer and Lapata (2014)).

3.3.5 The Transformers Era

A turning point for the field of NLP was Vaswani et al. (2017b)’s paper “Attention is all you need”, where the authors proposed for two machine translation tasks a novel architecture, the Transformer (not to be confused with the giant robots from Michael Bay’s blockbuster movies!), which leverages only the attention mechanism. Even though an exhaustive description of the Transformer architecture is beyond the scope of this subchapter, it is worth mentioning why Transformers became so popular over the past four years in the field of NLP (among others), in comparison to Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs). The three main properties of Transformers are the following:

- Self-Attention
- Parallel input processing
- Positional embeddings²

When feeding, for example, a textual sentence to an RNN, the network deals with one word after the other in a sequential fashion, and one of the known issues is that information contained in earlier parts of the sequence tends to “fade away” as the sentence is analyzed further: newer inputs carry a larger influence on the outputs at a given step. LSTMs try to mitigate this problem by introducing a component called “gate”, which regulates the information flow, namely which information from past inputs needs to be “remembered” by the model. The goal is to capture long-term dependencies among different parts of the sentence fed into the model. On the contrary, thanks to the Self-Attention mechanism, at each step Transformers can access previous steps, thus limiting the loss of information to a minimum.
Moreover, inputs are processed not sequentially but all at the same time, thus allowing the model to capture dependencies by looking at the sentence as a whole, and this can make a fundamental difference in many downstream applications: for example in the German language, in dependent clauses (“Nebensätze”), the verb comes at the end of the phrase, but it determines the case of the nouns that come before it. Transformers could thus potentially capture the dependencies between the verb coming at the end of the sentence and the words at the beginning. Lastly, Transformers encode for every input information on its position within a sentence, since it is often the case that the importance and meaning of a certain word varies depending on its position within a sentence. These are the Transformers, in a nutshell. But Transformers did not only bring a change of paradigm in terms of architectures.
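The Self-Attention mechanism just described can be sketched in a few lines. This is a simplification for illustration only: queries, keys and values are all taken equal to the inputs, whereas real Transformers derive Q, K and V through learned projection matrices (and use multiple heads):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention with Q = K = V = X for brevity.
    Every position attends to every other position at once, which is why
    no information from earlier tokens 'fades away'."""
    d = len(X[0])
    out = []
    for q in X:                              # one query per position
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d)
                  for k in X]                # similarity to every position
        weights = softmax(scores)            # attention distribution
        out.append([sum(w * v[j] for w, v in zip(weights, X))
                    for j in range(d)])      # weighted sum of values
    return out

print(self_attention([[1.0, 0.0], [0.0, 1.0]]))
```

Because each output position is a weighted sum over all positions, a German dependent clause's final verb can directly influence the representation of nouns at the start of the sentence, with no recurrence in between.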
First, while for the pre-Transformer-era models described before the focus was on the ability of word embeddings to capture similarity among words, the focus has now shifted more towards downstream tasks (more on this later in the evaluation section), encompassing not only purely linguistic ones but also tasks with visual components, such as, for example, visual question answering. It is now more difficult (but not impossible) to draw a line between models where “images support pure language models” (the object of this subchapter) and models which could actually be categorized as “vision and language” models but can also be employed to solve pure linguistic tasks. This issue brings up another peculiarity of many Transformer-based models, namely their “universal vocation”: without loss of generality, we could say that the idea is now to design powerful (multimodal) pre-training (mostly self-supervised) tasks capable of generating task-agnostic representations, whose encoded knowledge can be efficaciously transferred to diverse downstream tasks, limiting the amount of labeled data necessary to fine-tune the models (so-called few-shot learning). Let’s briefly discuss two examples, Flava (Singh et al. (2022)) and UniT (Hu and Singh (2021a)). Flava has two separate encoders for images and text and a multimodal encoder, all based on the Vision Transformer (Dosovitskiy et al. (2020a)).

² It may be argued that this point is a necessity to be able to work on sequences rather than a strength.
Unimodal pre-training consists of masked image modeling (where a set of image patches is to be reconstructed from the other, unmasked image patches) and masked language modeling. Multimodal pre-training tasks consist instead of a global contrastive loss (maximization of cosine similarities between paired images and texts), masked multimodal modeling (where image patches and text tokens are masked) and an image-text matching task. The model is pre-trained jointly on unimodal and multimodal datasets and then evaluated (fine-tuned) on 22 vision tasks, 8 purely linguistic tasks and 5 vision-and-language tasks. UniT has an image encoder and a text encoder, a multimodal domain-agnostic decoder and task-specific heads. There is no pre-training on multimodal data and the model is trained end-to-end on 7 tasks (vision, language, and vision-and-language) and 8 datasets, with the idea that jointly solving different tasks across domains should prevent general knowledge from being lost due to fine-tuning on particular downstream tasks. These two examples clearly show what is meant by the “universal vocation” of many modern Transformer-based models.
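The global contrastive objective mentioned for Flava can be sketched as follows; this is a generic image-text contrastive loss written for illustration (cross-entropy over cosine similarities within a batch), not Flava's actual implementation, and the `temperature` value is an assumed hyperparameter:

```python
import math

def cos(u, v):
    """Cosine similarity between two embedding vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def contrastive_loss(img_embs, txt_embs, temperature=0.07):
    """Global contrastive objective over a batch of paired embeddings:
    for image i, the matching text i should score higher than every
    other text in the batch (cross-entropy with target index i)."""
    loss = 0.0
    for i, img in enumerate(img_embs):
        logits = [cos(img, txt) / temperature for txt in txt_embs]
        m = max(logits)                                   # stability shift
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_z)                      # -log p(match)
    return loss / len(img_embs)
```

When every image embedding aligns with its own caption and not with the others, the loss approaches zero; shuffling the pairing drives it up, which is exactly the signal that pulls paired image and text representations together.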
But there are still models specifically designed to solve pure language tasks, and in the following pages two of them will be described.

3.3.5.1 Vokenization

It is often difficult for a child to describe the meaning of a certain word. A child might not be able to describe what a lion is, but if he is given pictures of different animals he might very well be able to point at the picture of a lion. Visual pointing could thus act as a form of supervision for natural language. Is it possible to build into a pure language model a form of visual supervision which mimics the visual pointing often adopted by children? This is exactly the problem that Tan and Bansal (2020) try to address: how to associate to each textual representation (token) a visual representation (Voken).
Let’s suppose we had a dataset of word(token)-image pairs. We could then integrate into the pre-training framework of pure language models the following Voken-Classification task:

$$\mathcal{L}_{VOKEN\text{-}CLS}(s) = -\sum_{i=1}^{l} \log p_i(v(w_i; s) \mid s)$$

$$h_1, h_2, \ldots, h_l = \mathrm{languagemodel}(w_1, w_2, \ldots, w_l)$$

$$p_i(v \mid s) = \mathrm{softmax}_v\{W h_i + b\}$$

where $\{h_i\}$ are the feature representations of each token in a sentence $s = \{w_i\}$ extracted from a language model (such as BERT) and the vokens originate from a finite set of images $X$. Each $h_i$ is then transformed into a probability distribution through a softmax layer, with the voken-classification loss defined as the negative log-likelihood of all related vokens.
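A minimal sketch of this voken-classification loss, written directly from the three formulas above (the hidden states, classifier weights `W`, `b` and voken indices are toy placeholders; in the real model the $h_i$ come from BERT and $W$, $b$ are the voken-classification head):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def voken_cls_loss(hidden_states, voken_ids, W, b):
    """L_VOKEN-CLS: negative log-likelihood of the gold voken for every
    token, with p_i(v|s) = softmax_v(W h_i + b)."""
    loss = 0.0
    for h, gold in zip(hidden_states, voken_ids):
        logits = [sum(w * x for w, x in zip(row, h)) + bi
                  for row, bi in zip(W, b)]   # one logit per voken in X
        probs = softmax(logits)
        loss += -math.log(probs[gold])        # NLL of the related voken
    return loss
```

If the classifier already assigns high probability to the gold voken of every token, the loss is near zero; in pre-training this term is simply added to the usual masked-language-modeling objective.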
The model architecture would then be:

FIGURE 3.41: From Tan and Bansal (2020). The language model is visually supervised with token-related images, called Vokens.

Everything sounds fantastic! There is only one small pitfall: such a set $X$ of images for all tokens does not exist! Could we find a proxy for such a set? One might consider image-captioning datasets such as MS COCO (Lin et al. (2014b)). But this suboptimal solution is also problematic. The Grounding Ratio is defined as the proportion of tokens in a dataset which are related to a specific visual representation (i.e. the tokens are visually grounded), such as “dog”, “table” and the like. In figure 3.42 it is striking that only around one third of the tokens contained in pure language corpora such as Wiki103, English Wikipedia and CNN/DM are visually grounded in image-captioning datasets³. It is not possible to rely (only) on image-captioning datasets to build the Voken-Classification task. But the fact that a word/token does not have a visual representation in one of these datasets does not mean that it is not possible to visually represent it. Would it be possible to associate images to words/tokens that are not directly visually grounded? Well, the answer is yes!
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' 3From an operative point of view, the authors consider a token type “visually grounded” if it has more than 100 occurrences in MS COCO Visual Vokens (Token-Related Images) Supervision nglish Visually- Supervised Language Model Vokenization Humans learn language by listening, speaking Language Language Tokens Input136 3 Multimodal architectures FIGURE 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='42: From Tan and Bansal (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Statistics of image-captioning dataset and other natural language corpora.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' VG, CC, Eng Wiki, and CNN/DM denote Visual Genome, Conceptual Captions, English Wikipedia, and CNN/Daily Mail, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' JSD represents Jensen–Shannon divergence to the English Wikipedia corpus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' FIGURE 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='43: From Tan and Bansal (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' The Vokenization process.' 
A contextualized image (visual token, voken) is retrieved for every token in a sentence, and with this visual token, visual supervision is performed.

Vokenization is a process that assigns every token $w_i$ contained in a sentence $s$ to a visual representation (called a voken) originating not from a generative model but rather from a finite set of images $X = \{x_1, \ldots, x_n\}$. The voken $v(w_i; s)$ is the image from $X$ which maximizes the following Relevance Score Function:

$$v(w_i; s) = \arg\max_{x \in X} r_{\theta^*}(w_i, x, s)$$

This function takes into account not only the token $w_i$ itself but also its context (the sentence), and it is parametrized by $\theta$, with $\theta^*$ being the optimal value (which has to be estimated).

3.3.5.1.1 The Relevance Score Function: Model, Training, Inference

The Relevance Score Function is defined as the inner product of the language feature representation $f_\theta(w_i, s)$ and the visual feature representation $g_\theta(x)$:

$$r_\theta(w_i, x, s) = f_\theta(w_i, s)^T g_\theta(x)$$

The corpus statistics shown in Figure 3.42 are:

Dataset   | # of Tokens | # of Sents | Vocab. Size | Tokens # / Sent. | 1-Gram JSD | 2-Gram JSD | Grounding Ratio
MS COCO   | 7.0M        | 0.6M       | 9K          | 11.8             | 0.15       | 0.27       | 54.8%
VG        | 29.2M       | 5.3M       | 13K         | 5.5              | 0.16       | 0.28       | 57.6%
CC        | 29.9M       | 2.8M       | 17K         | 10.7             | 0.09       | 0.20       | 41.7%
Wiki103   | 111M        | 4.2M       | 29K         | 26.5             | 0.01       | 0.05       | 26.6%
Eng Wiki  | 2889M       | 120M       | 29K         | 24.1             | 0.00       | 0.00       | 27.7%
CNN/DM    | 294M        | 10.9M      | 28K         | 26.9             | 0.04       | 0.10       | 28.3%

Supposing $h_1, \ldots, h_l$ and $e$ are the embeddings originating from the pre-trained language and visual encoders respectively (in the paper the authors use BERT and ResNeXt), the language and visual representations are obtained by first applying multi-layer perceptrons $w\_mlp_\theta$ and $x\_mlp_\theta$ to downproject the embeddings from the pre-trained models into a common vector space, and secondly normalizing them (with the L2 norm):

$$f_\theta(w_i; s) = \frac{w\_mlp_\theta(h_i)}{\|w\_mlp_\theta(h_i)\|} \qquad g_\theta(x) = \frac{x\_mlp_\theta(e)}{\|x\_mlp_\theta(e)\|}$$

With respect to the training of the model, to estimate the optimal value of the parameter $\theta$, image-captioning datasets, which are collections of sentence-image pairs, are employed. Operationally, for every sentence $s_k$ associated to image $x_k$ in the image-captioning dataset, each token $w_i$ in $s_k$ is associated to $x_k$, and the hinge loss is used to estimate the optimal value $\theta^*$:

$$L_\theta(s, x, x') = \sum_{i=1}^{l} \max(0, M - r_\theta(w_i, x, s) + r_\theta(w_i, x', s))$$

The goal is to maximize the Relevance Score Function between aligned token-image pairs $(w_i, x; s)$ and to minimize the score for unaligned pairs $(w_i, x'; s)$ by at least a margin $M$, with $x'$ being a randomly sampled image from the image-captioning dataset not associated to sentence $s$.
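A toy numpy sketch can make the projection, the hinge loss and the retrieval step concrete. The random linear maps below stand in for the trained MLPs, random vectors stand in for the BERT and ResNeXt features, and the margin value is arbitrary — this is a sketch of the mechanics, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Random stand-ins for the pre-trained encoder outputs (h_i, e) and the
# trained down-projections (w_mlp_theta, x_mlp_theta).
d_lang, d_vis, d_common = 8, 6, 4
W_lang = rng.normal(size=(d_lang, d_common))  # stand-in for w_mlp_theta
W_vis = rng.normal(size=(d_vis, d_common))    # stand-in for x_mlp_theta

def f_theta(h):
    """Language feature representation: project, then L2-normalize."""
    return l2_normalize(h @ W_lang)

def g_theta(e):
    """Visual feature representation: project, then L2-normalize."""
    return l2_normalize(e @ W_vis)

def relevance(h, e):
    """r_theta = f_theta^T g_theta: inner product of two unit vectors."""
    return float(f_theta(h) @ g_theta(e))

def hinge_loss(h_tokens, e_pos, e_neg, margin=0.5):
    """sum_i max(0, M - r(w_i, x, s) + r(w_i, x', s))."""
    return sum(
        max(0.0, margin - relevance(h, e_pos) + relevance(h, e_neg))
        for h in h_tokens
    )

# Inference: with unit-norm features, maximizing the inner product over the
# image set X is the same as a nearest-neighbour search.
images = rng.normal(size=(5, d_vis))   # the finite image set X
h_token = rng.normal(size=d_lang)      # one contextual token embedding
scores = g_theta(images) @ f_theta(h_token)
voken_idx = int(np.argmax(scores))
nn_idx = int(np.argmin(np.linalg.norm(g_theta(images) - f_theta(h_token), axis=1)))
assert voken_idx == nn_idx  # argmax inner product == nearest neighbour

tokens = rng.normal(size=(3, d_lang))
print(hinge_loss(tokens, e_pos=images[0], e_neg=images[1]) >= 0.0)  # True
```

The final assertion is exactly the equivalence proved in footnote 4 below: for unit-norm vectors, ranking by inner product and ranking by Euclidean distance pick out the same image.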
Once we have the language feature representation $f_\theta(w_i, s)$ for each token in our language corpus and the optimal estimate of $\theta$, how is it possible to find the image $x$, encoded with the visual feature representation $g_\theta(x)$, which maximizes the Relevance Score Function? As said earlier, the function is expressed as the inner product of the textual and visual representations, and since the feature vectors have Euclidean norm equal to 1, the inner-product maximization problem is equivalent to a nearest neighbor search problem. It is sufficient to find the vector $g_\theta(x)$ which is the nearest neighbor of $f_\theta(w_i, s)$⁴. With this process it is thus possible to assign a visual representation, a voken, to any word/token in a language corpus, pooling from a finite set of images. The problem of the low Grounding Ratio outlined above is solved, and the Voken-Classification task can be integrated in the pre-training framework of any pure language model. Moreover, the authors propose a method called Revokenization, which allows transferring vokens generated with a particular tokenizer to frameworks which employ other tokenizers.

⁴ The proof is straightforward. Let $X \in \mathbb{R}^l$ have Euclidean norm equal to 1, which means $\|X\|_2 = 1$. In the nearest neighbor search we need to find the vector $Y \in \mathbb{R}^l$, also with norm equal to 1, which has minimal Euclidean distance to $X$. This is the quantity to be minimized:

$$d(X, Y) = \sqrt{\sum_{i=1}^{l} (x_i - y_i)^2}$$

Squaring it:

$$d(X, Y)^2 = \sum_{i=1}^{l} x_i^2 + \sum_{i=1}^{l} y_i^2 - 2\sum_{i=1}^{l} x_i y_i = \|X\|_2^2 + \|Y\|_2^2 - 2X^T Y = 1 + 1 - 2X^T Y = 2(1 - X^T Y)$$

Through these simple algebraic manipulations it is possible to see that minimizing the Euclidean distance between $X$ and $Y$ is equivalent to maximizing $X^T Y$, which is the inner product. This proves the equivalence between inner-product maximization and nearest neighbor search.

3.3.5.2 One Step Further: The Power Of Imagination

Wikipedia defines imagination as “the production or simulation of novel objects, sensations, and ideas in the mind without any immediate input of the senses”. Indeed, humans do not only associate words with real images, but also leverage the ability to imagine words/concepts: imagination can help the human brain solve problems with limited supervision or few sample points by empowering its generalization capabilities. Until now we discussed language models supported by visual information in the form of real images (e.g. those retrieved from image-captioning datasets). With the recent advancements in the field of generative models for images, it is certainly worth investigating whether these generative models can help pure language models produce better representations of words. In particular, the framework proposed by Lu et al. (2022), iACE (Imagination-Augmented Cross-Modal Encoder), will now be discussed: the idea is to use a generative model to obtain a visual representation of a textual input and then use these imagined representations as “imagination supervision” for pure language models. This framework has two main components:

- The imagination generator G: given an input text x, VQGAN (Esser et al. (2021)) is used to render an “imagination” i of x, and CLIP (Radford et al. (2021a)) is used to see how well the generated image i is aligned to the input text x. This generative framework is known as VQGAN+CLIP.
- The cross-modal encoder Ec: the input text and the rendered imagination are first encoded with a language and a visual encoder respectively, and then CLIP is employed as the cross-modal encoder, with inputs being text-imagination pairs.

FIGURE 3.44: From Lu et al. (2022). The generator G visualizes imaginations close to the encoded texts by minimizing L_GAN. The cross-modal encoder Ec learns imagination-augmented language representations. The two-step learning procedure consists of: 1) pre-training a Transformer with visual supervision from a large-scale language corpus and image set; 2) fine-tuning the visually supervised pre-trained Transformer and the imagination-augmented cross-modal encoder on downstream tasks.

The learning procedure is composed of two main steps (depicted in Figure 3.44): the first step consists in the pre-training of a visually supervised Transformer. In particular, the Voken-Classification task described before is employed, alongside a masked language modeling task.
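The Voken-Classification head can be thought of as a token-level classifier over the finite voken set, trained with cross-entropy against the retrieved voken indices. A minimal sketch (the logits and voken indices below are made up for illustration; in the real model the logits come from the Transformer's hidden states):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def voken_classification_loss(token_logits, voken_ids):
    """Mean token-level cross-entropy against the retrieved voken indices.
    token_logits: (seq_len, n_vokens), voken_ids: (seq_len,)."""
    probs = softmax(token_logits)
    return float(-np.log(probs[np.arange(len(voken_ids)), voken_ids]).mean())

# Two tokens, a voken set of size three; each token's target is its voken.
logits = np.array([[2.0, 0.1, 0.1],
                   [0.1, 3.0, 0.1]])
print(round(voken_classification_loss(logits, np.array([0, 1])), 3))  # 0.183
```

In pre-training this loss would simply be added to the masked-language-modeling loss, so the Transformer receives both textual and visual supervision.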
This is the baseline model, where no information from the “imagination” procedure comes into play yet. The second step is the imagination-augmented fine-tuning on two downstream datasets D (GLUE, Wang et al. (2018), and SWAG, Zellers et al. (2018)). On one side, the visually supervised Transformer (the baseline) relies only on the textual input during the fine-tuning phase, and the following loss function is employed:

$$L_{Lang} = -\sum_{j=1}^{|D|} \sum_{k=1}^{K} y_k \log p_k(d_j(t) \mid D)$$

On the other side, the iACE is trained to minimize the following cross-entropy loss:

$$L_{Imagine} = -\sum_{j=1}^{|D|} \sum_{k=1}^{K} y_k \log p_k(d_j(t, v) \mid D)$$

with $t$ and $v$ being the textual and imagined feature representations respectively, $j$ indicating the $j$-th data sample belonging to dataset $D$, $K$ the number of classes and $p_k$ the conditional distribution of $d_j$. Training takes place jointly, and both losses, the imagination-augmented one $L_{Imagine}$ and the pure language loss $L_{Lang}$, are linearly combined, with $\lambda$ being a balance factor:

$$L = \lambda L_{Imagine} + (1 - \lambda) L_{Lang}$$

To sum up, this model-agnostic framework uses generated images for visual supervision and can be integrated on top of pure language models (such as BERT) or visually supervised models (such as the Voken model, which uses vokens, i.e. real images, for visual supervision).

3.3.6 Was It Worth?

In this subchapter we investigated how visual inputs can support pure language models in capturing the semantics of words.
We started with simple concatenation of linguistic and visual features and ended up with Transformer-based models, which are able to shape different word embeddings for the same word by also taking the context (the sentence) into account. But now the question arises: with the addition of visual information, do we obtain word embeddings that are better than those from pure language models? In other words, is everything we have discussed so far worth it? As is often the case in scientific research, the answer is: “it depends!” Individual evaluation of each single model might not be ideal, because each model has its peculiarities and it is impractical to make a direct comparison among them. It is more useful to capture and discuss the themes which are common to many models, in order to understand their strengths and weaknesses. This is how we will proceed, and we will also differentiate between evaluation before Transformers and evaluation after Transformers.
3.3.6.1 Evaluation In The Pre-Transformers Era

Before the advent of Transformers, the evaluation focus was on the degree of alignment between learned semantic representations (word embeddings) and the representations of human speakers, in the form of correlation between model-based and human-based word-similarity judgments. Three main types of similarity are usually considered:

- Semantic similarity, e.g. “pasta is similar to rice”
- Semantic relatedness, e.g. “bear is related to mountain”
- Visual similarity, e.g. “cucumbers look like zucchinis”

The evaluation pipeline could be summarized as follows:

FIGURE 3.45: Pipeline for intrinsic evaluation of semantic representations. In the first step, the cosine similarity between two word embeddings w1 and w2 is used as a similarity measure, and in a second step, the correlation with human speakers' assessments is computed to gauge the quality of the embeddings. The higher the correlation, the better the embeddings.

Word embeddings are vectors, and to measure the degree of similarity between two vectors, the cosine similarity is often used in the literature. In an ideal setting, we would have word embeddings with the following characteristics: if two words are semantically similar, the two embedding vectors should be similar and their cosine similarity should go towards 1.
If the two words are unrelated, the embedding vectors should be orthogonal to each other and, as a consequence, the cosine similarity should go towards zero. Lastly, if two words are negatively related, the two embedding vectors should point in opposite directions and the cosine similarity should go towards -1. Once these similarity measures between word pairs are computed, several benchmarks can be employed to measure the quality of the embeddings, such as MEN (Bruni et al. (2014)), WordSim353 (Agirre et al. (2009)) and SimLex999 (Hill et al. (2015)). These datasets can be described as collections of word pairs with associated similarity ratings by human speakers. Operationally, this means that real people were asked whether a pair of words was related or not, and to which degree, on a scale from -1 (negatively related) to +1 (semantically equivalent).
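The three regimes just described (similar, unrelated, negatively related) can be checked with a few toy vectors:

```python
import numpy as np

def cosine_similarity(w1, w2):
    """Cosine of the angle between two embedding vectors."""
    return float(w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2)))

a = np.array([1.0, 0.0])
print(cosine_similarity(a, np.array([2.0, 0.0])))   # 1.0  (similar: same direction)
print(cosine_similarity(a, np.array([0.0, 3.0])))   # 0.0  (unrelated: orthogonal)
print(cosine_similarity(a, np.array([-1.0, 0.0])))  # -1.0 (negatively related: opposite)
```

Note that cosine similarity ignores vector length, which is why it is preferred over the raw inner product when embeddings are not normalized.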
The higher the correlation between the cosine similarity and the similarity judgments by humans, the higher the quality of the word embeddings. Having made this methodological premise, let's discuss the performance of these pre-Transformer models. Since the goal of these models is to enhance pure language models with the addition of visual inputs, the baseline in the evaluation is always one (or more) pure language model(s). So, do visually grounded embeddings outperform non-grounded ones? What emerges from virtually all papers is that visual grounding can actually help get a better semantic representation of concrete concepts, such as “cat”, “table” and “bicycle”, whereas it does not help much with the representation of abstract concepts such as “love” and “peace”.
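The correlation step of this intrinsic evaluation pipeline can be illustrated with a toy example (both the model similarities and the human ratings below are invented for illustration; benchmarks like MEN or SimLex999 provide the real ratings):

```python
import numpy as np

# Five hypothetical word pairs: model cosine similarities vs. human ratings.
model_sims   = np.array([0.92, 0.15, -0.40, 0.70, 0.05])
human_scores = np.array([0.95, 0.10, -0.50, 0.80, 0.00])

# Pearson correlation between model and human judgments; the closer to 1,
# the better the embeddings track human similarity intuitions.
corr = float(np.corrcoef(model_sims, human_scores)[0, 1])
print(corr > 0.95)  # True: these toy embeddings agree closely with the humans
```

Published benchmarks typically report Spearman rank correlation instead of Pearson, but the logic is the same: higher correlation means better embeddings.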
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 Evaluation In The Post-Transformers Era A limitation of the intrinsic evaluation metrics is the high degree of subjec- tivity: the similarity between two concepts depends in many instances on the experience, cultural background and preferences of the human observers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' This is why the evaluation focus has now shifted to a more extrinsic dimension: how well do the models perform in downstream tasks?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' The problem of the “lack Cosine Similarity W1: W2 Correlation with human Word pairs (w1,w2) ratings [ / W1|| / /w2]142 3 Multimodal architectures FIGURE 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='46: From Hill and Korhonen (2014): Each bar represents a differ- ent model settings and the dashed line indicates the pure linguistic benchmark model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' In figure 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='46 we can see that pure language models still perform better than models with visual inputs when it comes to the representation of abstract nouns.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Another example is Kiela et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' (2017): they found that their models perform better when tested on datasets with a higher degree of concreteness and the same conclusion is reached by Collell et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' (2017), which state that visual information can empower the representations of concepts that are to a certain extent visual.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' To sum up, effective semantic representation of abstract concepts constitute the main limitation common to many of the models discussed in this section.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' of objectivity” is thus solved because on downstream tasks there is no room for opinions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' The datasets used to train the models are also different and the most widely used are: GLUE (Wang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' (2018)): 9 tasks, including single-sentence tasks (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' sen- timent analysis), similarity tasks (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='g.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' paraphrasing), inference tasks (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' tex- tual entailment) SQuAD (Rajpurkar et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' (2016)): question/answer pairs SWAG (Zellers et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' (2018)): multiple choice questions about grounded situations As previously discussed, many Transformer-based models have universal voca- tion: they are built to solve a heterogeneous range of tasks from the language and vision domain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' If we thus consider only performance on pure language tasks, the following two tables from Tan and Bansal (2020) are insightful: It is straightforward: unlike in the pre-Transformers Era, where grounded word embeddings could improve performance over baselines, Transformer-based universal models do not outperform pure language models such as BERT or RoBERTa.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Nonetheless, the addition of visual supervision (the Voken- 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='364 Propagation Method Johns and Jones Ridge Regression Our Model (α=1) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='232 lation 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='265 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='25 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='236 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='225 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='197 Corre 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='116 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' -- 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='07 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0 abstract nouns all nouns concrete verbs abstract verbs all verbs3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 Images supporting Language Models 143 FIGURE 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='47: From Tan and Bansal (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Statistics of image-captioning dataset and other natural language corpora.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' VG, CC, Eng Wiki, and CNN/DM denote Visual Genome, Conceptual Captions, English Wikipedia, and CNN/Daily Mail, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' JSD represents Jensen–Shannon divergence to the English Wikipedia corpus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' FIGURE 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='48: From Tan and Bansal (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Fine-tuning results of different pre-trained models w/ or w/o the voken classification task (denoted as“Voken- cls”).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Classification task) in the pre-training framework can boost performance above the level of pure language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Pezzelle et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' (2021) analyzed the intrinsic quality of embeddings of some vision and language (“universal”) models: From this intrinsic evaluation perspective (which was popular in the pre- Transformers Era), vision and language models do not generally outperform domain-specific models such as BERT and also in this case the only real competitor of pure language models is a model with visual supervision (again, Vokenization).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' The bar plots depict correlation between human- and model-based similarity ratings, differentiating between the most concrete concepts contained in a Model Init.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' with BERT?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Diff.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' to BERT Weight SST-2 QNLI QQP MNLI ViLBERT (Lu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=', 2019) Yes 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0e-3 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 88.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 VL-BERT (Su et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=', 2020) Yes 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4e-3 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='5 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='9 VisualBERT (Li et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=', 2019) Yes 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='5e-3 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='9 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 82.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 Oscar (Li et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=', 2020a) Yes 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6e-3 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='5 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 LXMERT (Tan and Bansal, 2019) No 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0e-3 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='5 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='8 31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='8 BERTBASE (Devlin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=', 2019) 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0e-3 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 BERTBASE + Weight Noise 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='5e-3 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='9 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='9 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3Method SST-2 QNLI QQP MNLI SQuAD v1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 SQuAD v2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0 SWAG Avg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' BERT6L/512H 88.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='9 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3/80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2/60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='8 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 BERT6L/512H + Voken-cls 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='7 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 71.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='5/80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3/64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='8 BERT12L/768H 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='9 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0/85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='7/71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 65.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='7 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 BERT12L768H + Voken-cls 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='8/86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='7 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1/71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 RoBERTa 6L/512H 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='8 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 85.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='9/61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='9 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6/52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='7 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 RoBERTa 6L512H + Voken-cls 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='8 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='5 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0/66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 50.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='9/54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 RoBERTa 12L/68H 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='5 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2/79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='9 59.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2/63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='1 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6 RoBERTa 12L/768H + Voken-cls 90.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='5 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='2 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='8 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='0/82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='5 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='9/69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='3 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='4 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='6144 3 Multimodal architectures FIGURE 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='49: From Pezzelle et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' (2021).' 
Spearman's rank correlation between similarities computed with the representations of all tested models and human similarity judgments in the five evaluation benchmarks.

FIGURE 3.50: From Pezzelle et al. (2021). Correlation between model and human similarity ratings on WordSim353, SimLex999 and MEN. Each barplot reports results on both the whole benchmark and the most concrete subset of it.

The bar plots depict the correlation between human- and model-based similarity ratings, differentiating between the most concrete concepts contained in a certain dataset⁵ and the whole dataset (thus including more abstract concepts). The results confirm the trend: multimodal models are more effective than pure language models at representing concrete words, but in many instances they still lag behind when it comes to more abstract concepts.
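This kind of intrinsic evaluation reduces to a rank correlation between model-derived similarities and human ratings. The sketch below is a minimal illustration: the word-pair scores are invented for the example, and the closed-form Spearman formula used here assumes there are no tied ranks.

```python
def spearman(x, y):
    """Spearman's rank correlation via the classic formula
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)); valid when there are no ties."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank_pos, idx in enumerate(order, start=1):
            r[idx] = rank_pos
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# hypothetical benchmark slice: human ratings vs. model cosine similarities
human_ratings = [9.2, 7.5, 3.1, 0.8]
model_sims    = [0.83, 0.71, 0.40, 0.12]
print(spearman(human_ratings, model_sims))  # 1.0: the model preserves the human ranking
```

A model whose cosine similarities order the word pairs exactly as humans do scores 1.0, regardless of the absolute similarity values; this is why rank correlation, rather than raw agreement, is reported in these benchmarks.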
⁵ See Brysbaert et al. (2014) for information on how the concreteness of a word can be estimated.

Spearman ρ correlation (layer of the representation in parentheses):

model            input   RG65          WS353         SL999         MEN           SVERB
BERT-1M-Wiki*    L       0.7242 (1)    0.7048 (1)    0.5134 (3)    –             0.3948 (4)
BERT-Wiki ours   L       0.8107 (1)    0.7262 (1)    0.5213 (0)    0.7176 (2)    0.4039 (4)
GloVe            L       0.7693        0.6097        0.3884        0.7296        0.2183
BERT             L       0.8124 (2)    0.7096 (1)    0.5191 (0)    0.7368 (2)    0.4027 (3)
LXMERT           LV      0.7821 (27)   0.6000 (27)   0.4438 (21)   0.7417 (33)   0.2443 (21)
UNITER           LV      0.7679 (18)   0.6813 (2)    0.4843 (2)    0.7483 (20)   0.3926 (10)
ViLBERT          LV      0.7927 (20)   0.6204 (14)   0.4729 (16)   0.7714 (26)   0.3875 (14)
VisualBERT       AT      0.7592 (2)    0.6778 (2)    0.4797 (4)    0.7512 (20)   0.3833 (10)
Vokenization     LV      0.8456 (9)    0.6818 (3)    0.4881 (9)    0.8068 (10)   0.3439 (9)

Last but not least, a few words need to be spent on a topic which has been steadily gaining relevance: Few-Shot Learning. To train and test models, a large pool of paired images and texts is often needed, and the creation of many of the datasets used in fine-tuning required a huge data collection effort, which had to be performed by human agents. This implies that the creation of such data pools can be very costly. For this reason, there is a growing interest in creating models able to cope with low-resource settings. This boils down to the question: can a model perform well on downstream tasks even with just a limited number of training examples? The goal is, once again, to
mimic how humans learn: a person does not need to see one thousand pictures of a table to be able to recognize a table.

FIGURE 3.51: From Lu et al. (2022). Model-agnostic improvement in few-shot setting with the GLUE benchmark.

This table from Lu et al. (2022), where models are trained using only up to 5% of the training set, shows for example the ability of a model supervised with “imagination” (a generated visual representation of a given textual input) to outperform models with only simple visual supervision (the Voken-model). This is just an example, but the ability to perform well in few-shot settings has become the touchstone of the evaluation of modern multimodal models.

3.3.7 The End Of This Story

We started this story with the Symbol Grounding Problem, which affirms that to grasp the meaning of a word, the word has to be put in a context other than the purely linguistic one. We thus investigated some of the architectures proposed to ground words in a visual space in the form of static images.
The goal (hope) is to better capture the semantics of words, in the form of better word embeddings, to be employed in heterogeneous tasks, from semantic similarity to downstream tasks such as sentiment analysis. From this brief analysis it emerges that grounding words in images can actually improve the representation of concrete concepts, whereas visual grounding does not seem to add value to pure language models when it comes to abstract concepts. Nonetheless, forms of visual supervision like the Voken-Classification task or the employment of generative models which allow one to imagine words, such as in the iACE-Framework, might be the right way to bridge this gap.

The Transformers have been a revolution in the field of NLP and, with their advent, the trend has now become to build models with pre-training tasks capable of generating powerful task-agnostic word representations. The knowledge gained with these tasks can then be transferred to downstream tasks with the goal of limiting the amount of labeled data necessary to fine-tune models. Labeling data is indeed costly: this is why the ability of a model to generalize well when exposed to just a few training examples has been steadily gaining importance as an evaluation metric. This is the so-called few-shot learning.

Few-shot results with the GLUE benchmark (Lu et al., 2022):

                        SST-2                  QNLI                   QQP                    MNLI
Extreme Few-shot        0.1%   0.3%   0.5%     0.1%   0.3%   0.5%     0.1%   0.3%   0.5%     0.1%   0.3%   0.5%
VOKEN (BERT-base)       54.70  77.98  80.73    50.54  51.60  61.96    44.10  60.65  65.46    37.31  54.62  58.79
iACE (BERT-base)        77.98  80.96  81.42    51.64  58.33  64.03    49.36  63.67  71.17    40.07  56.49  59.57
VOKEN (RoBERTa-base)    70.99  71.10  77.86    54.37  62.23  65.78    62.32  67.25  70.18    48.59  49.76  58.23
iACE (RoBERTa-base)     75.34  78.66  83.60    54.79  65.03  65.83    65.43  68.11  70.77    48.94  52.74  59.39
Normal Few-shot         1%     3%     5%       1%     3%     5%       1%     3%     5%       1%     3%     5%
VOKEN (BERT-base)       81.40  86.01  84.75    64.17  77.36  80.19    72.55  78.37  80.50    60.45  62.73  72.35
iACE (BERT-base)        82.45  87.04  86.47    65.09  79.54  80.52    74.31  78.69  80.52    62.15  70.43  73.73
VOKEN (RoBERTa-base)    83.78  84.08  87.61    75.00  81.16  81.23    73.14  79.09  79.63    63.51  70.68  74.02
iACE (RoBERTa-base)     83.83  84.63  89.11    79.35  81.41  81.65    73.72  79.38  79.81    65.66  70.76  74.10
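The percentage splits in such protocols correspond to subsampling the training set before fine-tuning. A minimal sketch of one plausible way to do this (an illustrative label-stratified sampler, not the exact procedure of Lu et al. (2022)) could look as follows:

```python
import random
from collections import defaultdict

def few_shot_subset(examples, labels, fraction, seed=0):
    """Draw a label-stratified `fraction` of a training set,
    keeping at least one example per label."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for x, y in zip(examples, labels):
        by_label[y].append((x, y))
    subset = []
    for items in by_label.values():
        rng.shuffle(items)
        k = max(1, int(len(items) * fraction))  # e.g. 0.001 for the 0.1% split
        subset.extend(items[:k])
    rng.shuffle(subset)
    return subset

# toy sentiment-style data: 1000 examples with two balanced labels
train = [f"sentence {i}" for i in range(1000)]
labels = [i % 2 for i in range(1000)]

small = few_shot_subset(train, labels, 0.005)  # the "0.5%" split
print(len(small))  # → 4 (two examples per label)
```

Stratifying by label keeps the tiny subset balanced, which matters most at the extreme fractions where naive random sampling can miss a class entirely.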
Moreover, Transformer-based models have a “universal vocation”: they tend to be multimodal and multi-task, encompassing vision, language, and vision-and-language tasks. This idea might be appealing because humans learn by being exposed to a multitude of different inputs and tasks. But, as we have seen, pure language models such as BERT tend to still outperform multimodal multi-task models. There is definitely room for improvement. One might wonder whether the grounding of words in images is the right way to seek a better representation of words. Well, humans learn using all five senses, and maybe the answer is to incorporate more heterogeneous perceptual information into the models: not only static images but also videos, speech and the like. The debate is still open: the story goes on...
Last but not least, a mention needs to be made of concrete applications of these image-empowered word embeddings. The use of images to support linguistic models has been experimented with in several fields, from Dialogue Response Generation (e.g. Sun et al. (2021)) to Machine Translation, where for example Ive et al. (2019) found images to improve the quality of translation when the textual context is generic and/or ambiguous. The number of potential applications of the models described in this subchapter is growing steadily in the scientific community. But this is yet another story...
3.3.8 Appendix: Selected Models - Summary
A table (available here) contains a summary of selected language models augmented with visual components. For each model, the following information is reported:
- Pure language model and pretraining data
- Visual features and pretraining data
- Fusion strategy of the two modalities
- Benchmarks/baselines for evaluation

3.4 Text supporting Vision Models
Author: Max Schneider
Supervisor: Jann Goschenhofer

3.4.1 Introduction
“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.
[...] Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. [...] One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great.” — Sutton (2019)

This insight seems to directly inspire most model choices presented in this chapter. Each network can be seen as an attempt of its creators to employ their vast available resources on a large scale, with a particular focus on dataset sizes. This mostly becomes feasible through the adaptation of recent findings in natural language processing (NLP; see chapter 2.1) to computer vision (CV). On the one hand, architectural concepts first popularized in NLP are translated to CV (e.g., self-supervised learning or the Vision Transformer; Dosovitskiy et al., 2020b) (see chapter 2.2). On the other hand, these powerful new NLP models, mostly Transformers (Vaswani et al., 2017b), support bigger models from the inside as text-encoding building blocks; hence the name of this chapter. Throughout this chapter, we will introduce the recent relevant CV models CLIP (Radford et al., 2021a), ALIGN (Jia et al., 2021b) and Florence (Yuan et al., 2021) and discuss their underlying core concepts. Their strong performances confirm the potential, hinted at by the impressive GPT-3 (Brown et al., 2020), of improving CV and increasing scale with the help of NLP.

3.4.2 Concepts

3.4.2.1 Web-scale data
A core problem that troubles researchers is the lack of robustness of previous state-of-the-art CV models to distribution shifts, i.e., when a model with good performance on its original dataset fails to generalize (transfer its knowledge) to new, more or less similar datasets. For example, Radford et al. (2021a) report that a ResNet101 which they trained on ImageNet to an accuracy of 76.2% maintains only an accuracy of 32.6% on ObjectNet. This suggests that the model perhaps did not learn high-quality latent representations, but instead overfit to the dataset-specific data-generating distribution. A common way to tackle this would be to try out various changes to the architecture and the training algorithm of the network. But this kind of adaptation, inscribing expert knowledge into the model, seems to repeat the mistake pointed out by Sutton (2019); “micromanaging” a model is likely to thwart future scaling.
The researchers of CLIP, ALIGN and Florence follow a different approach, based on scale. They try to increase sample size as much as possible and work with tremendous numbers of training observations:
- 400 million (CLIP; Radford et al., 2021a)
- 900 million (Florence; Yuan et al., 2021)
- 1.8 billion (ALIGN; Jia et al., 2021b)
These large-scale datasets are generated using the vast number of image-text pairs produced by and readily available on the internet. Thus, error-prone, costly and labor-intensive (i.e., difficult to scale) manual labeling is avoided.
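Such crawled pairs are cleaned with lightweight heuristics rather than manual labels, for instance keeping English text and dropping non-informative alt-texts. The following is a minimal, purely illustrative sketch of such a filter predicate; all patterns and thresholds are assumptions, not those used by any of the cited papers:

```python
import re

# Hypothetical set of alt-texts considered non-informative.
UNINFORMATIVE = {"image", "photo", "img", "picture", "untitled"}

def keep_pair(alt_text: str) -> bool:
    """Heuristic filter for one crawled image/alt-text pair."""
    text = alt_text.strip().lower()
    if not text:
        return False
    # Drop non-informative alt-texts such as bare file names.
    if text in UNINFORMATIVE or re.fullmatch(r"[\w-]+\.(jpg|jpeg|png|gif)", text):
        return False
    # Require at least two words of mostly ASCII text as a crude English proxy.
    if len(text.split()) < 2:
        return False
    return sum(c.isascii() for c in text) / len(text) > 0.9
```

A real pipeline would add further stages (e.g., graphic-content detection), but the principle is the same: cheap, scalable predicates applied to billions of candidate pairs.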
Unfortunately, the models trained on web data also become vulnerable to its downsides. Because of its extremely noisy nature, some form of preprocessing is still needed, e.g., filtering for English language, excluding graphic content and, optionally, removing images with non-informative alt-texts. This makes some degree of dataset curation, and therefore arbitrary choices, necessary. Likewise, the social biases inherent to the internet are reproduced. Furthermore, while this approach improves data efficiency to some degree (see next subsection 3.4.2.2), the poor data efficiency of deep learning is not substantially enhanced but mainly just compensated for with a super-scalable source of supervision (Radford et al., 2021a).

3.4.2.2 Contrastive objective
This source of supervision is the information contained in the co-occurrence of the image with its alt-text. It is accessed through natural language supervision. The architectures jointly train two sub-networks for image and text encoding, respectively. During this, the vector encodings are aligned in the latent representation space through minimizing a variant of the contrastive loss function (3.10) (Tian et al., 2020). Half of the total loss, for the first image-text pair, is

$$
\ell_1^{V_{\text{img}},V_{\text{txt}}} = -\mathbb{E}_{\{v^1_{\text{img}},\, v^1_{\text{txt}},\, \ldots,\, v^N_{\text{txt}}\}}\left[\log \frac{h_\theta(\{v^1_{\text{img}}, v^1_{\text{txt}}\})}{h_\theta(\{v^1_{\text{img}}, v^1_{\text{txt}}\}) + \sum_{k=2}^{N} h_\theta(\{v^1_{\text{img}}, v^k_{\text{txt}}\})}\right], \tag{3.10}
$$

where v¹_img and v¹_txt are vector encodings (latent representations) of image 1 and text 1 and h_θ(·) is a similarity measure. In order to guarantee symmetry, the total loss is formed by the sum of ℓ₁^{V_img,V_txt} and ℓ₁^{V_txt,V_img}, where the pairwise similarities of one text and every image are calculated instead of the other way around. Figure 3.52 visualizes this. Initially, all images and texts in the training data are encoded by the responsible sub-network. Using the resulting encodings, a similarity matrix with elements h_θ({vⁱ_img, vʲ_txt}) can be calculated.
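The loss (3.10), together with its symmetric counterpart, can be sketched numerically. Below is a minimal NumPy version under common assumptions: h_θ is taken to be the exponential of a temperature-scaled cosine similarity (a CLIP-style choice; the temperature value here is arbitrary), so each row and column of the similarity matrix turns into a softmax.

```python
import numpy as np

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Batch estimate of eq. (3.10) plus its text-to-image counterpart."""
    # L2-normalize so that the dot product is the cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # N x N matrix of scaled similarities

    def log_softmax(x, axis):
        x = x - x.max(axis=axis, keepdims=True)  # numerical stability
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    # Diagonal entries correspond to the N true image-text pairs.
    loss_img2txt = -np.mean(np.diag(log_softmax(logits, axis=1)))
    loss_txt2img = -np.mean(np.diag(log_softmax(logits, axis=0)))
    return loss_img2txt + loss_txt2img
```

Perfectly aligned encoders push the diagonal similarities up and drive the loss towards zero, while mismatched pairs yield a high loss.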
Loosely speaking, the contrastive objective is to maximize the elements on the diagonal of this matrix and minimize the others.

FIGURE 3.52: Visualization of a contrastive objective (Radford et al., 2021a). After encoding the data, a similarity matrix for the images and texts is computed. The aim is that the N true image-text pairs score high in terms of similarity, while the N² − N other possible combinations score low.

Contrastive learning can be contrasted with classical predictive learning. Figure 3.53 gives an interesting insight into the choice of the space where goodness of fit is measured. The exemplary task is to color an image given its B/W version.
Approach (a) first encodes the B/W image and then decodes the interim latent representation into fitting colors. The goodness of this fit is measured in the output space, meaning the estimated colors are compared to the true colors. Conversely, approach (b) measures the loss in the representation space.⁶ A reason for the good performance of contrastive learning could be that, while common prediction losses (e.g., the L2 loss) penalize each prediction output dimension independently, approach (b) implies measurement in the intertwined representation space (Tian et al., 2020).

FIGURE 3.53: Predictive vs. contrastive learning: Predictive losses are measured in the output space while contrastive losses are measured in the representation space, indicated by red dotted boxes (Tian et al., 2020).

But in the end, rather than theoretical considerations, the driving factor for using this objective is data efficiency. As can be seen in figure 3.54, Radford et al. (2021a) start their search for an adequate pre-trained model (more on this in subsection 3.4.2.3) by experimenting with a Transformer-based language model predicting the exact captions of an image. It turns out that this approach trains three times slower, in terms of data efficiency, than a simpler baseline of predicting a bag-of-words text encoding. Additionally, switching to the contrastive objective of CLIP improves data efficiency by a factor of four. Nonetheless, the switch to contrastive learning leads to some limitations. Its rigidity demands certain extra steps and forfeits the high flexibility of generative models. In particular, this means contrastive models similar to CLIP are limited to choosing from available options and cannot freely generate texts or images. To extend the capabilities of these models, additional network building blocks are necessary.
⁶Note that contrastive learning easily works with combinations of modalities other than text and image; here, B/W and colors.

FIGURE 3.54: Data efficiency of the contrastive objective. Development of zero-shot accuracy (see next subsection 3.4.2.3) on ImageNet with an increasing number of instances of training data processed by the models. The contrastive objective reaches similar accuracy scores as the generative approach with only a seventh of the amount of data (Radford et al., 2021a).

3.4.2.3 Foundation models and zero-shooting
The first models which are considered foundation models today began to appear in NLP. The term, later coined by Bommasani et al. (2021), refers to models that are noteworthy due to their large scale and ability to adapt to a wide variety of downstream tasks. An early example is BERT (Devlin et al., 2018b). Often, foundation models have an unfinished touch to them and the true scope of their capabilities cannot be sketched out clearly.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' This generally is the case because the desired abilities of neural networks are not designed for explicitly, but rather emerge during their implementation and usage on downstream tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Bommasani et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' (2021) cite GPT-3’s ability to perform certain types of new tasks solely by confronting it with the right natural language prompt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=', it is possible to get GPT-3 to summarize a paragraph by appending “TL;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content='DR” (too long, didn’t read) to the prompt, which is a common pattern on the internet to signal a following summery.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' This is referred to as “in-context learning” (Brown et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=', 2020).' 
It is apparent that one can come up with plenty of unexpected ways to employ these models, and it remains unknown whether there are further ways no one has thought of yet. This means possibly saving computational and data collection costs down the line, which unfortunately also holds for malicious use cases, e.g., surveillance.

Foundation models build on the concept of transfer learning, i.e., pre-training a model on a feasible source task and applying it to the desired downstream task. In the context of this chapter this means pre-training on web-scale data (see subsection 3.4.2.1) and evaluating performance on various common classification datasets. E.g., Radford et al. (2021a) name the SVHN dataset as a proxy for the task "street number transcription" with the caveat "on the distribution of Google Street View photos", but they remark that many datasets have no obvious, specific task associated with them, e.g., CIFAR-10. They use these kinds of datasets to measure the "robustness to distribution shift and domain generalization" of their model, which is still a topic of great interest, as mentioned in subsection 3.4.2.1.

FIGURE (Radford et al., 2021a): Zero-shot accuracy over the number of images processed, comparing bag-of-words contrastive (CLIP) pre-training with bag-of-words prediction and a Transformer language model; 3x and 4x efficiency gains are marked.

When there is no further fine-tuning on the downstream task, i.e., no resuming of training on the new target dataset, this is referred to as zero-shooting. Zero-shooting has the clear advantage of evaluating performance in a less biased way, as processes like overfitting to the data-generating distribution will not distort the results. Figure 3.55 shows how contrastive models perform zero-shot transfer. In the case of image classification, all available classes are encoded by the language model. Afterwards, the CV sub-network computes the encoding of the image to be classified, and all pair-wise similarity scores are returned.
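The zero-shot classification procedure just described boils down to a cosine-similarity lookup between one image encoding and all class-prompt encodings. The sketch below stubs out both encoders with random vectors (a real model such as CLIP would produce them); only the similarity-and-argmax logic mirrors the described procedure:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def zero_shot_classify(image_embedding, text_embeddings):
    """Return the cosine similarity between one image embedding and
    every class-prompt embedding, plus the index of the best match."""
    img = l2_normalize(image_embedding)
    txt = l2_normalize(text_embeddings)
    sims = txt @ img                # one cosine similarity per class
    return sims, int(np.argmax(sims))

# Stand-ins for encoder outputs; row i is the embedding of class i.
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(3, 8))                   # e.g., "plane", "car", "dog"
image_emb = text_embs[2] + 0.1 * rng.normal(size=8)   # an image close to class 2

sims, pred = zero_shot_classify(image_emb, text_embs)
print(sims, pred)
```

The predicted class is simply the prompt whose encoding scores the highest similarity.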
The pair with the best score can be retrieved as the decision. Image retrieval works the other way around: after an initial encoding of all images, the ones most similar to the encoded natural language text prompt in the representation space can be returned.

FIGURE 3.55: Visualization of zero-shooting (Radford et al., 2021a).

3.4.3 Architectures

3.4.3.1 CLIP

The first of the large-scale contrastive CV models to be published was CLIP, short for Contrastive Language-Image Pre-training (Radford et al., 2021a). The components of its name are explained in the previous subsections 3.4.2.2, 3.4.2.1 and 3.4.2.3 and are crucial concepts of ALIGN and Florence as well. CLIP is a product of OpenAI, but its code is freely available and the different versions can be accessed as Python modules. The dataset used for training is not released, though.

A lot of preliminary work stems from Zhang et al. (2020b), who introduced contrastive representation learning using image-text pairs. Their implementation of the contrastive loss function (3.10) follows

$$\ell_1^{(V_{\text{img}}, V_{\text{txt}})} = -\log \frac{\exp\big(\langle v_{\text{img}}^{1}, v_{\text{txt}}^{1}\rangle / \tau\big)}{\sum_{k=1}^{N} \exp\big(\langle v_{\text{img}}^{1}, v_{\text{txt}}^{k}\rangle / \tau\big)}, \qquad (3.11)$$

where $\langle v_{\text{img}}^{1}, v_{\text{txt}}^{1}\rangle$ represents the cosine similarity, i.e., $v_{\text{img}}^{1\top} v_{\text{txt}}^{1} / (\lVert v_{\text{img}}^{1}\rVert \lVert v_{\text{txt}}^{1}\rVert)$, and $\tau \in \mathbb{R}^{+}$ is a temperature parameter, which is directly learned during training (Zhang et al., 2020b). CLIP adopts this. $\ell_1^{(V_{\text{txt}}, V_{\text{img}})}$, the counterpart to $\ell_1^{(V_{\text{img}}, V_{\text{txt}})}$ in the total loss, is function (3.11) with switched arguments. This can be viewed as a symmetric cross-entropy loss over the cosine similarities of the embeddings (Radford et al., 2021a).

Architecture

The text encoder for CLIP (see figure 3.53) is a modified Transformer (Vaswani et al., 2017b), which was also used for GPT-2 (Radford et al.
, 2019b). For the image encoder, multiple sub-networks are evaluated:

- ResNets: ResNet-50, ResNet-101
- ResNets following EfficientNet-style model scaling: RN50x4, RN50x16, RN50x64
- Vision Transformers: ViT-B/32, ViT-B/16, ViT-L/14

The best-performing sub-network was the ViT-L/14. In turn, they trained it for an additional epoch with higher-resolution images (336px), denoting this version ViT-L/14@336px. If not indicated otherwise, the performances of this version of CLIP are displayed. The EfficientNet-style ResNets use 4x, 16x and 64x the compute of a ResNet-50, and the largest model (the RN50x64) trained for 18 days on 592 V100 GPUs, while the ViT-L/14 only took 12 days on 256 GPUs. The high parallelization capabilities of Transformers seem to pay off.

When explaining zero-shooting initially (see subsection 3.4.2.3), a text processing step was skipped. As can be seen in figure 3.55, there is an additional operation before the labels are fed into the text encoder. In order to help the model understand the context of the words, the class labels are embedded in a sentence, e.g., "A photo of a {label}.". This increases the model's zero-shot accuracy on ImageNet by 1.3 percentage points (pp). When ensembling 80 different context prompts⁷, Radford et al.
(2021a) improve ImageNet accuracy by an additional 3.5pp, which adds up to a total of nearly 5pp. The average performance gain across 36 datasets is reported to be 5pp. It is similarly possible to directly communicate visual concepts like "picture", "macro", "drawing" or even "dog" to the model.

Robustness

Figure 3.56 illustrates the performance of CLIP and a ResNet101 whose training on ImageNet was stopped at the point where it reached the same accuracy as zero-shot CLIP. It can be deduced that the methods studied in the paper of Radford et al. (2021a) constitute an important step towards closing the robustness gap mentioned earlier (see subsection 3.4.2.1). While the performance of the ResNet101 deteriorates on datasets generated from data distributions increasingly different from ImageNet, CLIP remains fairly accurate. Note that these findings have to be taken with a grain of salt: because OpenAI does not grant public access to their training data, independent parties cannot investigate these claims on their own. E.g., one has to rely on the conclusions of their overlap analysis to rule out that CLIP has seen biasing amounts of the test data during training.

FIGURE 3.56: Robustness of zero-shot CLIP to distribution shifts (Radford et al.
, 2021a).

⁷ Prompts like: "A photo of a big {label}.", "A photo of a small {label}." (Radford et al., 2021a)

Dataset           ResNet101   Zero-Shot CLIP   ΔScore
ImageNet          76.2        76.2             0%
ImageNetV2        64.3        70.1             +5.8%
ImageNet-R        37.7        88.9             +51.2%
ObjectNet         32.6        72.3             +39.7%
ImageNet Sketch   25.2        60.2             +35.0%
ImageNet-A        2.7         77.1             +74.4%

CLIP as a building block

Shen et al. (2021) study how the performance of Vision-and-Language (V&L) models improves when the visual encoder is switched to CLIP's strong image encoder. They discover that in this field of CV the ViT-B scores significantly worse than the ResNets. E.g., tests on image captioning reveal that the V&L model using ViT-B often performs only half as strong as the version using the RN50x4 (the largest network used in this study). This is possibly due to the pooling strategies of ViT-B, which result in a lack of visual localization abilities. Shen et al. (2021) test their hypothesis and generate, e.g., figure 3.57, which depicts Grad-CAM visualizations for a V&L model with a ViT-B backbone and a ResNet-50 backbone and the question "What color is the woman's shirt on the left?". The red area indicates relevant pixels and appears much more focused for CLIP-Res50 than for CLIP-ViT-B.
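To make the training objective behind these contrastive models concrete, the following numpy sketch implements the symmetric loss, i.e., equation (3.11) averaged with its argument-swapped counterpart. The batch size, embedding dimension and temperature below are illustrative assumptions, not values from the papers:

```python
import numpy as np

def symmetric_contrastive_loss(img_embs, txt_embs, tau=0.07):
    """Symmetric contrastive loss over a batch of matching image-text
    pairs: row i of each matrix belongs to pair i, and all other rows
    act as negatives (cf. equation 3.11 and its swapped counterpart)."""
    img = img_embs / np.linalg.norm(img_embs, axis=1, keepdims=True)
    txt = txt_embs / np.linalg.norm(txt_embs, axis=1, keepdims=True)
    logits = img @ txt.T / tau    # pairwise cosine similarities / temperature
    n = logits.shape[0]

    def cross_entropy(l):
        # -log softmax, evaluated on the diagonal (the matching pairs)
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(n), np.arange(n)].mean()

    # image->text direction plus text->image direction
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 16))
loss_matched = symmetric_contrastive_loss(emb, emb)   # perfectly aligned pairs
loss_random = symmetric_contrastive_loss(emb, rng.normal(size=(4, 16)))
print(loss_matched, loss_random)
```

Perfectly aligned pairs drive the loss towards zero, while unrelated embeddings leave it near chance level.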
FIGURE 3.57: Grad-CAM visualizations for the prompt "What color is the woman's shirt on the left?".

3.4.3.2 ALIGN

The approach of Jia et al. (2021b) is largely similar to CLIP. They reiterate the necessity of large-scale vision datasets, but assert that even CLIP's data collection process still involves a non-trivial amount of data curation. They propose that the amount of additional observations obtained by minimizing the filtering makes up for the increased noise. Following this rationale, they create a training dataset with 1.8 billion image-text pairs. The corresponding model is named ALIGN, short for "A Large-scale ImaGe and Noisy-text embedding", whose acronym hints at the contrastive loss, which aligns vector encodings in the representation space (see subsection 3.4.2.2).

Architecture

ALIGN follows the dual-encoder architecture employed by Zhang et al. (2020b) and Radford et al. (2021a), but uses a part of BERT-Large as the text encoder and EfficientNet-L2 as the image encoder, which they jointly train from scratch. The model has around 800 million parameters (Alford, 2021).
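The dual-encoder pattern shared by CLIP and ALIGN can be sketched as two independent towers projecting into one shared embedding space. The single linear layers below are stand-ins for the deep text and image networks these models actually use:

```python
import numpy as np

rng = np.random.default_rng(2)

class LinearEncoder:
    """Placeholder tower: one linear projection into the shared space
    (the real encoders are deep Transformers / CNNs)."""
    def __init__(self, in_dim, shared_dim):
        self.w = rng.normal(size=(in_dim, shared_dim)) / np.sqrt(in_dim)

    def __call__(self, x):
        z = x @ self.w
        return z / np.linalg.norm(z, axis=-1, keepdims=True)  # unit length

text_encoder = LinearEncoder(in_dim=32, shared_dim=8)
image_encoder = LinearEncoder(in_dim=64, shared_dim=8)

texts = rng.normal(size=(5, 32))     # stand-in raw text features
images = rng.normal(size=(5, 64))    # stand-in raw image features

txt_embs = text_encoder(texts)
img_embs = image_encoder(images)

# Both modalities now live in the same 8-d space, so one similarity
# matrix compares every image with every text.
similarity = img_embs @ txt_embs.T
print(similarity.shape)  # (5, 5)
```

Because both outputs are unit-length vectors in the same space, the dot product is exactly the cosine similarity that the contrastive loss operates on.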
Subsection 3.4.4 goes into more detail about the performance of ALIGN and compares all three models discussed in this subsection.

Connecting image and text representations

The contrastive loss function aligns the latent representations of the different modalities. In other words, the explicit objective is that similar vector encodings imply similar inputs. This means arithmetic operations like the ones mentioned in chapter 2.1 are meaningful not only on encodings belonging to the same modality, but also across different modalities. E.g., one can add up the image encoding of a picture of the Eiffel tower and the text encoding of the word "snow" and retrieve pictures with high cosine similarity as a result; see figure 3.58 for an illustration.

FIGURE 3.58: Multimodal image retrieval via arithmetic operations on word and image embeddings.

3.4.3.3 Florence

While in principle the approach of Yuan et al. (2021) does not largely differ from the others, the focus of this paper is more about creating a true foundation model. In order to achieve this, they propose a map of possible vision applications which they try to cover by extending the core model with modules.
As figure 3.59 depicts, they want to advance into the dimensions of fine-grained object detection, dynamic action recognition, and true multimodal tasks. Due to their big ambitions, they name their model Florence after "the birthplace of Renaissance" (Yuan et al., 2021).

FIGURE 3.59: Florence's approach to foundation models: a general-purpose vision system for all tasks.

Architecture

As the two encoders for the pre-trained core, they use a hierarchical Vision Transformer (CoSwin Transformer) for images and a Transformer similar to CLIP's for text. The model's 893 million parameters are jointly trained from scratch on 900 million image-text pairs.
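A rough sketch of the kind of bidirectional contrastive objective used to train such dual-encoder models is given below. This is a generic CLIP-style formulation, not the exact loss of Yuan et al. (2021); the optional `positives` mask only hints at where a variant could mark more than one text per image as a valid match.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, positives=None, temperature=0.07):
    """Bidirectional (image->text and text->image) contrastive loss.

    positives[i, j] = 1 marks text j as a valid match for image i.
    The default identity mask is the plain pairwise objective; passing a
    same-label mask roughly sketches a unified, label-aware variant.
    """
    n = img_emb.shape[0]
    if positives is None:
        positives = np.eye(n)
    # L2-normalize both towers so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    # Log-softmax over texts (per image) and over images (per text).
    log_p_i2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t2i = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    # Average negative log-likelihood over all positives, both directions.
    loss_i2t = -(positives * log_p_i2t).sum(axis=1) / positives.sum(axis=1)
    loss_t2i = -(positives * log_p_t2i).sum(axis=0) / positives.sum(axis=0)
    return float((loss_i2t.mean() + loss_t2i.mean()) / 2)

# Perfectly aligned pairs give a much lower loss than shuffled ones.
aligned  = contrastive_loss(np.eye(4), np.eye(4))
shuffled = contrastive_loss(np.eye(4), np.roll(np.eye(4), 1, axis=0))
print(aligned < shuffled)  # -> True
```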
The alignment happens in the so-called image-label-description space, which is encoded through a special version of the contrastive loss function that regards all image-text pairs with the same label as positive instances. Figure 3.60 depicts their version of figure 3.52, where one can see schematically how they flexibly add modules to the pre-trained core in order to adapt to various downstream tasks.

FIGURE 3.60: Modular architecture of Florence.

3.4.4 Performance comparison

Throughout the papers of Radford et al. (2021a), Jia et al.
(2021b) and Yuan et al. (2021) we were able to collect three tables with reported performance measures to compare these approaches.

[Contents of the figure 3.59 and 3.60 diagrams: a Space dimension ranging from coarse (classification, retrieval) to fine-grained (object detection, segmentation, object tracking), a Time dimension from static to dynamic (action recognition), and a Modality dimension from visual-only to depth and caption (visual question answering); the Florence core combines a Language Encoder and an Image Encoder (CoSwin), trained via Unified Contrastive Learning on an image-text dataset curated from the internet, with adaptation modules such as the Dynamic Head Adaptor (object detection), the METER Adaptor (VQA), and Video CoSwin (action recognition), on top of a scalable training and deployment infrastructure.]

Table 3.61 summarizes the zero-shot accuracies on four different ImageNet variants. Unfortunately, Yuan et al. (2021) only state their performance on the original ImageNet, where they beat CLIP and ALIGN by a margin of 7.3pp. The results on the other three ImageNet variants are mixed, and there is no clear winner between CLIP and ALIGN.

FIGURE 3.61: Top-1 accuracy of zero-shot transfer of models to image classification on ImageNet and its variants.

Table 3.62 concerns zero-shot image retrieval on the Flickr30K and MSCOCO datasets (see chapter 2.3).
Even though there are no major score differences, there is a clear ranking, with CLIP in third, ALIGN in second, and Florence in first place.

FIGURE 3.62: Zero-shot image and text retrieval (Yuan et al., 2021).

The most comprehensive comparison is shown in table 3.63. It depicts the accuracy of zero-shot CLIP and Florence on various datasets, as well as the scores of all three models fine-tuned on the respective datasets. Florence beats CLIP in nearly all evaluations, in the zero-shot setting as well as in fine-tuned performance. Jia et al. (2021b) only report on four of these twelve datasets, where they win half of the time.
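The R@1 and R@5 numbers reported in these retrieval tables are recall-at-K scores. A minimal sketch of how such a score is computed, assuming the ground-truth match for query i is candidate i (a diagonal similarity matrix convention):

```python
import numpy as np

def recall_at_k(sim, k):
    # sim[i, j]: similarity between query i and candidate j; the ground
    # truth match for query i is assumed to be candidate i.
    ranks = np.argsort(-sim, axis=1)          # candidates, best first
    hits = [i in ranks[i, :k] for i in range(sim.shape[0])]
    return float(np.mean(hits))               # fraction of queries hit in top-k

sim = np.array([
    [0.9, 0.1, 0.0],   # query 0: correct match ranked first
    [0.2, 0.8, 0.1],   # query 1: correct match ranked first
    [0.5, 0.4, 0.3],   # query 2: correct match ranked last
])
print(recall_at_k(sim, 1), recall_at_k(sim, 3))  # -> 0.6666666666666666 1.0
```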
Summing up, ALIGN achieves its goal of replicating CLIP's impressive performance while dramatically reducing the required data curation effort, and Florence delivers the overall top performance. This could be attributed to its custom loss, to Yuan et al. (2021) striking the best balance between sample size and data curation, or to Florence having the best sub-networks; or to a combination of all three. Once again, note that none of the training datasets were made publicly available, so it cannot be guaranteed that all benchmarks were evaluated on unseen data.

Table 3.62 (zero-shot retrieval, R@1/R@5 in percent):

              Flickr30K (1K test set)            MSCOCO (5K test set)
              Image→Text     Text→Image      Image→Text     Text→Image
              R@1    R@5     R@1    R@5      R@1    R@5     R@1    R@5
CLIP          88.0   98.7    68.7   90.6     58.4   81.5    37.8   62.4
ALIGN         88.6   98.7    75.7   93.8     58.6   83.0    45.6   69.8
Florence      90.9   99.1    76.7   93.6     64.7   85.9    47.2   71.4

Table 3.61 (zero-shot top-1 accuracy):

              ImageNet   ImageNet-R   ImageNet-A   ImageNet-V2
CLIP          76.2       88.9         77.2         70.1
ALIGN         76.4       92.2         75.8         70.1
Florence      83.7       -            -            -

FIGURE 3.63: Top-1 accuracy of CLIP, Florence, and ALIGN on various datasets.

3.4.5 Resources

One can access the pre-trained CLIP models on GitHub, and they have even found their way into simple command-line tools already. For example, there is a CLI named rclip, which can be used for personal image retrieval, wrapping the ViT-B/32 CLIP architecture. On a mid-range laptop, we were able to find seemingly good matches for search terms we tried out inside a folder containing about 100 different pictures.
After an initial caching, one request took about ten seconds. Furthermore, CLIP continues to be used inside new models, e.g. DALL·E 2, where it is used for the image embedding (Ramesh et al., 2022b). Also, there is a crowd-sourcing effort to replicate CLIP's training dataset, called LAION-400M (Schuhmann, 2022). To validate the image-text pairs collected for it, their cosine similarity is computed using CLIP, and instances with a value that is too low are discarded. To our knowledge, no resources were open-sourced as part of the other two papers, ALIGN and Florence.
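That filtering step can be sketched as follows. The embeddings here are toy vectors standing in for real CLIP output, and the 0.3 cut-off is an assumed threshold value, not one taken from the text above:

```python
import numpy as np

def filter_pairs(img_embs, txt_embs, threshold=0.3):
    # Keep only the image-text pairs whose embeddings have a cosine
    # similarity of at least `threshold`; returns the surviving indices.
    img = img_embs / np.linalg.norm(img_embs, axis=1, keepdims=True)
    txt = txt_embs / np.linalg.norm(txt_embs, axis=1, keepdims=True)
    sims = (img * txt).sum(axis=1)            # per-pair cosine similarity
    return np.nonzero(sims >= threshold)[0]

img = np.array([[1.0, 0.0], [0.0, 1.0]])
txt = np.array([[1.0, 0.1], [1.0, 0.0]])      # pair 0 matches, pair 1 does not
print(filter_pairs(img, txt))  # -> [0]
```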
[Table 3.63, rows as extracted; the twelve datasets include Food101, CIFAR10, CIFAR100, SUN397, Stanford Cars, Aircraft, VOC2007, Pets, Caltech101, Flowers102, and ImageNet:
CLIP (zero-shot): 93.8, 95.7, 77.5, 68.4, 78.8, 37.2, 84.3, 55.7, 93.5, 92.8, 78.3, 76.2
Florence (zero-shot): 95.1, 94.6, 77.6, 77.0, 93.2, 55.5, 85.5, 66.4, 95.9, 94.7, 86.2, 83.7
CLIP (fine-tuned): 95.9, 97.9, 87.4, 82.2, 91.5, 71.6, 89.9, 83.0, 95.1, 96.0, 99.2, 85.4
ALIGN (fine-tuned): 95.9, 96.1, 96.2, 88.6
Florence (fine-tuned): 96.2, 97.6, 87.1, 84.2, 95.7, 83.9, 90.5, 86.0, 96.4, 96.6, 99.7, 90.1]

3.5 Models for both modalities

Author: Steffen Jauch-Walser

Supervisor: Daniel Schalk

Data is naturally at the heart of every data-scientific issue. While there have been many advances in machine learning in recent years, many promising research areas remain, as do a multitude of problems associated with them. One such promising area are multi-modal machine learning models. Combining different input data is a key aspect of making models more sophisticated. When thinking about teaching robots specific tasks, detecting hateful memes, or detecting deep fakes, it is apparent that success might only be achieved through the combination of multiple modalities. Context is key. However, learning context requires increasingly complex models. While early machine learning models built their success upon the possibility of analyzing the big pool of available, often unstructured data, modern machine learning models are so demanding that there is often not enough data or training time available. Obtaining data is a major issue for multi-modal machine learning.
Since labelling data in vast amounts is prohibitively expensive, larger models have to come up with specific strategies to move forward, such as self-supervised training or automatically scraped web datasets. Nevertheless, when models become so large that billions of parameters have to be learned, even scraping the whole web starts to show its limits. Another natural issue is the transformation of different types of data into usable model inputs. There is no shortage of different single-modality machine learning models. On the contrary, when every new hyperparameter configuration might be seen as a new model, it becomes hard to keep track. More importantly, it is often not clear how a model from one area transfers to another. Did we learn some modality-specific bias or a general principle? Consolidating different models into a unifying framework is a key prospect of multimodal machine learning.
While the grand dream of a single unifying model might be out of reach, consolidating different areas is well in sight. In the following, we will have a look at the challenges and prospects of multimodal machine learning against the background of visual language models. Visual language models are models which can deal with both language and images as input data. Specifically, we will have a closer look at three different models: Data2vec, VilBert and Flamingo. Data2vec is an unsupervised model that can handle different modalities, but not their interaction, using a single unifying training framework. VilBert is an early visual-language model that can handle interactions between images and text through its innovative concept of cross-attention. Flamingo is a recent few-shot visual language model that features large expressive text capabilities through the use of a large language model.
With 80B parameters, it particularly highlights how to leverage the communication between frozen models when further scaling up the model size. An overview of the popularity of current research fields in visual language modelling is provided in Figure 3.64. A detailed list of trends for each of those fields can be found in Uppal et al. (2022). Most research is done in the areas of visual question answering (VQA) and visual captioning (VC), but also, for example, visual commonsense reasoning (VCR), vision-language navigation (VLN) or multimodal affective computing (MAC). MAC uses images and text to infer sentiment, for example through facial expressions. VCR, as an extension of VQA, is particularly interesting in the realm of making models more interpretable.
After all, we would like to know why machine learning models do what they do. Finally, VLN has many promising practical applications in the field of robotics, particularly the interaction of humans and robots.

3.5 Models for both modalities

FIGURE 3.64: Uppal et al. (2022): VisLang Paper Trends (previous 2 years)

3.5.1 Data2vec

With data2vec (Baevski et al., 2022), data scientists at Meta, formerly Facebook, developed an architecture that addresses some of the mentioned issues while highlighting the importance of sophisticated training schemes. Their algorithmic structure is able to work with either text, image or speech data.
On top of that, the model is self-supervised based on a teacher-student relationship, which reduces the need for human labelling. It is not a universal model in the sense that it works with any input, nor is it even a general model in the sense that the algorithm is exactly the same for each modality. However, the overall model structure remains the same for either text, speech or image input data, while only the specific encoding, normalization and masking strategies are modality-specific. In that regard, it is a step towards a more general way of dealing with different modalities, and it is very effective at doing so given the benchmark results on typical data sets. Particularly noteworthy is also the way they implement the self-supervised learning. Data2vec predicts contextualized and continuous representations rather than the typically used discrete tokens such as sub-words.
Working with latent representations of the input space has two advantages: not only is the number of prediction targets not a priori limited, but they are also richer in information. Figure 3.65 depicts the general model architecture. The two main components are a teacher and a student model which only differ in one aspect: the weights of the teacher model are an exponentially decaying average of the student's weights. The purpose of the teacher model is to create training targets for the student model. In a first step, a modality is chosen and inputs are encoded according to the specific encoding scheme for that modality. A masked version is given to the student model, but notably, the teacher model has access to an unmasked, complete view of the input data. Hence, the resulting training targets will be fully contextualized using a self-attention mechanism over the whole input data.

[Figure 3.64 shares: VC 31%, VQA 25%, VCR 10%, VLN 10%, MAC 8%, VG 7%, MMT 6%, VR 4%]

FIGURE 3.65: Baevski et al. (2022): Data2vec architecture - a teacher model creates contextualized latent targets on the basis of its top K layers (blue) as prediction task to train the student model

The training targets are based on the top K layers of the teacher model, depicted in blue in Figure 3.65. More specifically, denote by $y_t$ the training target at time $t$ and by $\hat{a}_t^l$ the normalized output of the $l$-th block; then

$$y_t = \frac{1}{K} \sum_{l=L-K+1}^{L} \hat{a}_t^l,$$

i.e. the training targets are the average of the outputs of the top $K$ layers of the teacher network after a normalization has been applied.
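The teacher-student mechanics described above, EMA weight tracking and top-K layer averaging, can be sketched in a few lines of NumPy. The function names are ours and the layer normalization is a simplified stand-in for the one used in the paper:

```python
import numpy as np

def update_teacher(teacher_params, student_params, tau=0.999):
    """EMA update: each teacher weight is an exponentially decaying
    average of the corresponding student weight."""
    return [tau * w_t + (1.0 - tau) * w_s
            for w_t, w_s in zip(teacher_params, student_params)]

def normalize(a, eps=1e-6):
    """Normalize each time step's feature vector; this stabilizes
    the targets and helps prevent collapse."""
    mu = a.mean(axis=-1, keepdims=True)
    sd = a.std(axis=-1, keepdims=True)
    return (a - mu) / (sd + eps)

def build_targets(block_outputs, K):
    """y_t = (1/K) * sum of the normalized outputs of the top K
    blocks (block_outputs is ordered bottom to top)."""
    top_k = [normalize(a) for a in block_outputs[-K:]]
    return np.mean(np.stack(top_k), axis=0)
```

Because the EMA update never receives gradients, the teacher stays a smoothed, lagged copy of the student rather than a separately trained network.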
Normalization helps to stabilize the training process and prevent model collapse, which can be an issue with models that learn their own representation. From the authors' point of view, working with a latent representation of the actual learner as training target is a simplification of many commonly used modality-specific designs, despite the caveat that this paper still uses modality-specific encoding strategies. Compared to other models, there is no cross-modality training. The specific loss function used to regress the targets is a smooth L1 loss:

$$L(y_t, f_t(x)) = \begin{cases} \frac{(y_t - f_t(x))^2}{2\beta} & \text{if } |y_t - f_t(x)| \leq \beta \\ |y_t - f_t(x)| - \frac{\beta}{2} & \text{otherwise} \end{cases}$$

Using a smooth L1 loss has the advantage of being continuous yet less sensitive to outliers; however, the $\beta$ parameter needs tuning. As far as the general model architecture is concerned, the underlying architecture is a standard transformer architecture (Vaswani et al., 2017b).
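A minimal NumPy version of the smooth L1 loss (the function name is ours) makes the behaviour of the two branches easy to check:

```python
import numpy as np

def smooth_l1(y, f, beta=1.0):
    """Smooth L1 loss: quadratic for small residuals, linear for
    large ones; both branches meet at |y - f| = beta."""
    r = np.abs(y - f)
    return np.where(r <= beta, 0.5 * r ** 2 / beta, r - 0.5 * beta)
```

At the boundary `|y - f| == beta` both branches evaluate to `beta / 2`, so the loss is continuous, while residuals beyond `beta` only grow linearly.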
How does the modality-specific input handling work? In many ways, the authors combine the strategies developed in multiple previous works and add a unifying framework on top. For images, the typical Vision Transformer (ViT) strategy (Figure 3.66) of transforming images with a size of 224x224 pixels into 16x16 pixel patches is employed. The image is thereby converted into a sequence of 196 flattened patch representations, each linearly projected and equipped with a learnable positional encoding, which serve as input to the vision transformer. A classification token is used to produce the final categorization.
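The 224x224 to 196-patch conversion can be reproduced with a short NumPy sketch (the learned linear projection and positional encodings are omitted):

```python
import numpy as np

def patchify(img, patch=16):
    """Split an image of shape (H, W, C) into a sequence of flattened
    non-overlapping patch vectors of length patch*patch*C."""
    h, w, c = img.shape
    gh, gw = h // patch, w // patch
    return (img[:gh * patch, :gw * patch]
            .reshape(gh, patch, gw, patch, c)
            .transpose(0, 2, 1, 3, 4)      # group row/column patch grids
            .reshape(gh * gw, patch * patch * c))

seq = patchify(np.zeros((224, 224, 3)))
# a 224x224x3 image yields (224/16)^2 = 196 tokens of length 16*16*3 = 768
```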
The contextualization is produced in the multi-head attention blocks, as explained in earlier chapters. In short, multi-head attention first projects the keys, queries and values with learned linear projections, which are then evaluated in parallel to create more expressive attention maps. Attention itself is calculated as scaled dot-product attention, using a softmax over the scaled dot product of queries and keys to weight the values (Vaswani et al., 2017b). As far as the vision transformer itself is concerned, data2vec tests two different model sizes: a base model with 12 and a large model with 24 transformer blocks. The masking strategy for images follows the BERT pre-training approach of image transformers, BEiT, proposed by Bao et al. (2021). In particular, multiple adjacent blocks are masked with random aspect ratio.
The minimum size of a masked block is 16 patches. In total, 60% of patches are masked in the data2vec algorithm, an increase over the original 40% used by BEiT; the authors found the increased masking to be more accurate. The augmentation strategies are similar as well: resizing crops, horizontal flipping and colour jittering were used. Naturally, the student and teacher model are given the same modified image. Finally, for image data, the model is measured on a classification task. Hence, the authors use a mean-pooling over all patches in the last transformer block and input that into a softmax-normalized projection that conducts the classification, which is again based on the BEiT model.
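Returning to the attention computation mentioned above, a single-head scaled dot-product attention without the learned projections can be written as:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # each row sums to one
    return weights @ V
```

Multi-head attention runs several such maps in parallel on separately projected Q, K and V, then concatenates the results.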
FIGURE 3.66: Dosovitskiy et al. (2021): Vision Transformer (ViT)

The natural language processing model is implemented with a PyTorch toolkit named fairseq and based on the RoBERTa (Liu et al., 2019b) architecture, which redesigned the standard BERT model training procedure to make it more robust and effective. In particular, it increases hyperparameters such as the learning rate and the batch size. It also removes the next-sentence prediction task to improve on the masked language modelling performance. In this case, they follow Sennrich et al. (2015b) and encode sub-words as 50k byte-pairs. A separate embedding vector is learned for each type.
For the masking, the BERT masking scheme is used: 15% of the embedded tokens are replaced; thereof, 80% are learned masks, 10% are unchanged and the remaining 10% are replaced with random tokens from the vocabulary. Another strategy that the authors also consider is the wav2vec masking strategy, which masks four consecutive tokens with a probability of 0.35 while only using learned tokens (Baevski et al., 2020). As it turns out, the latter strategy further improves the results. The natural language processing model is evaluated on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018), which includes, for example, natural language inference, sentence similarity and sentiment analysis tasks. The speech category is also implemented in fairseq.
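The 15% / 80-10-10 scheme can be sketched as follows, at token level and with a toy vocabulary (names are ours):

```python
import random

def bert_mask(tokens, vocab, mask_token="[MASK]", p=0.15, seed=0):
    """Select ~15% of positions as prediction targets; of those,
    80% become [MASK], 10% stay unchanged, 10% get a random token."""
    rng = random.Random(seed)
    out, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < p:
            targets[i] = tok          # original token to be predicted
            roll = rng.random()
            if roll < 0.8:
                out[i] = mask_token
            elif roll < 0.9:
                pass                  # keep the original token
            else:
                out[i] = rng.choice(vocab)
    return out, targets
```

Keeping 10% of the selected tokens unchanged and corrupting another 10% forces the model to produce useful representations even for positions that do not show a `[MASK]` symbol.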
The feature encoder for speech is based on the wav2vec framework and uses 16 kHz inputs. It is built upon seven temporal convolutions intertwined with normalization layers and a GELU activation function, such that the output rate of the encoder is 50 Hz. As far as the results are concerned, data2vec achieved state-of-the-art performance in vision and language tasks among similar self-supervised models.

FIGURE 3.67: Baevski et al. (2022): data2vec performance (vision)

Figure 3.67 shows the model's performance in computer vision. Pre-trained and fine-tuned simply on the data of the well known ImageNet-1K dataset, data2vec was evaluated using top-1 accuracy, the standard notion of accuracy, on the task of predicting single labels for images.
The base model ViT-B comprises 86M parameters and ViT-L 307M parameters. The results show that predicting contextualized latent representations in a masked prediction setup can work well as model training compared to classical local methods such as predicting visual tokens. Top-1 accuracy on ImageNet-1K (reconstructed from Figure 3.67):

                                  ViT-B   ViT-L
  Multiple models
    BEiT (Bao et al., 2021)        83.2    85.2
    PeCo (Dong et al., 2022)       84.5    86.5
  Single models
    MoCo v3 (Chen et al., 2021b)   83.2    84.1
    DINO (Caron et al., 2021)      82.8      -
    MAE (He et al., 2021)          83.6    85.9
    SimMIM (Xie et al., 2021)      83.8      -
    iBOT (Zhou et al., 2021)       83.8      -
    MaskFeat (Wei et al., 2021)    84.0    85.7
    data2vec                       84.2    86.6

MoCo v3 (Chen et al., 2021) is a self-supervised model trained on a contrastive loss. The most similar model is DINO (Caron et al., 2021), which also uses a self-distillation setup to predict teacher outputs using a cross-entropy loss. However, their prediction target was the final layer rather than averaged layers, while using differing images for teacher and student network. The well-performing MAE model (He et al., 2022) is a masked autoencoder which is trained on reconstructing masked pixels using an asymmetric encoder-decoder architecture. In contrast, MaskFeat (Wei et al., 2022) uses masked feature prediction. Notably, data2vec outperforms all of them although it was trained for the same number of epochs or fewer; in particular, MAE and MaskFeat use 1600 epochs rather than the 800 used by data2vec.

FIGURE 3.68: Baevski et al. (2022): data2vec results (language)

Figure 3.68 shows the performance in the language domain. For the language domain, the model is evaluated on the GLUE benchmark (Wang et al., 2018). The model is pre-trained and fine-tuned separately on the labelled data from each task.
Accuracy is reported as the average across 5 tuning cycles. While data2vec achieves a higher average performance than the baseline model, there are tasks where the baseline model prevails. A large portion of the performance difference seems to be driven by the CoLA task. The Corpus of Linguistic Acceptability (CoLA) consists of 10657 sentences from 23 linguistics publications, and the task is to judge whether they are grammatically correct. Hence, it is distinctly different from the other tasks. The Stanford Sentiment Treebank (SST) analyzes sentiment in language through movie reviews. The Multi-Genre Natural Language Inference (MultiNLI) corpus contains sentence pairs and focusses on textual entailment across genres. Similar tasks are used in the Recognizing Textual Entailment (RTE) dataset, which focuses on text from news and Wikipedia. The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset that contains answers from Wikipedia to corresponding questions posed by an annotator. The task for the model is to find out whether the sentence contains the answer to the question.

Table 3. Natural language processing: GLUE results on the development set for single-task fine-tuning of individual models. For MNLI we report accuracy on both the matched and mismatched dev sets; for MRPC and QQP, we report the unweighted average of accuracy and F1; for STS-B, the unweighted average of Pearson and Spearman correlation; for CoLA, we report Matthews correlation; and for all other tasks we report accuracy. BERT Base results are from Wu et al. (2020) and our baseline is RoBERTa re-trained in a similar setup as BERT. We also report results with wav2vec 2.0 style masking of spans of four BPE tokens with no unmasked tokens or random targets.

                             MNLI       QNLI  RTE   MRPC  QQP   STS-B  CoLA  SST   Avg.
BERT (Devlin et al., 2019)   84.0/84.4  89.0  61.0  86.3  89.1  89.5   57.3  93.0  80.7
Baseline (Liu et al., 2019)  84.1/83.9  90.4  69.3  89.0  89.3  88.9   56.8  92.3  82.5
data2vec                     83.2/83.0  90.9  67.0  90.2  89.1  87.2   62.2  91.8  82.7
+ wav2vec 2.0 masking        82.8/83.4  91.1  69.9  90.0  89.0  87.7   60.3  92.4  82.9

Figure 2 (plots omitted): Predicting targets which are the average of multiple layers is more robust than predicting only the topmost layer (K = 1) for most modalities. The plots show the performance of predicting the average of K teacher layer representations (§3.3) for (a) speech (word error rate), (b) NLP (GLUE score) and (c) vision (top-1 valid accuracy). The effect is very pronounced for speech and NLP, while for vision there is still a slight advantage to predicting more than a single layer.
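The table caption above mixes several metrics. The Matthews correlation used for CoLA is worth spelling out, since unlike plain accuracy it stays informative under class imbalance. A minimal sketch from binary confusion-matrix counts (the function name is our own):

```python
import numpy as np

def matthews_corr(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return num / den if den else 0.0  # conventionally 0 for degenerate cases
```

A perfect classifier yields 1.0, chance-level predictions yield values near 0, and a perfectly inverted classifier yields -1.0.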
QQP stands for Quora Question Pairs, which analyzes paraphrases. Finally, the Microsoft Research Paraphrase Corpus (MRPC) also consists of sentence pairs from newswires which may or may not be paraphrases of each other. As a suitable baseline model, the authors retrain RoBERTa in the respective setup. On top of the heterogeneous performance across language tasks, the evaluation also clearly shows that averaging over multiple layers to create prediction targets improves performance across all three domains. The effect seems to be most pronounced on NLP tasks, whereas CV does not benefit from averaging more than three layers. In the speech domain, six layers seem to be enough to reach peak performance. In any case, the performance loss from simply averaging the maximum number of layers, rather than fine-tuning K, seems small enough to be acceptable.
To sum up, data2vec is a self-supervised model that can work with either text, speech or image data, but not across modalities. It aims at unifying the learning framework through a teacher-student setup that allows for contextualized latent target prediction. The teacher model is based on a complete view of the input data, which introduces contextualization, while the student model only sees a masked version of the input. Compared to previous work, the authors average the top K layers rather than only the final layer of the model, which has a notable effect as shown in Figure 3.68. As there are different layers in the transformer network, the authors also investigate which layers work best for prediction. They conclude that the output of the feedforward layer works best.
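The target construction just described can be sketched in a few lines. The following is a minimal NumPy sketch (the function name and the simple per-sequence normalisation are our own simplifications; data2vec applies a parameter-less normalisation to each layer before averaging):

```python
import numpy as np

def build_targets(layer_outputs, k):
    """Average the top K teacher layers into one prediction target.

    layer_outputs: list of (seq_len, dim) arrays, ordered bottom to top.
    Each layer is normalised over the sequence dimension before
    averaging (a simplification of the normalisation used in data2vec).
    """
    stacked = np.stack(layer_outputs[-k:])        # (K, seq_len, dim)
    mean = stacked.mean(axis=1, keepdims=True)
    std = stacked.std(axis=1, keepdims=True) + 1e-6
    return ((stacked - mean) / std).mean(axis=0)  # (seq_len, dim)
```

The student is then trained to regress these targets at the masked positions; setting k = 1 recovers the "final layer only" setup of prior work.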
Built on a transformer architecture, self-attention is the main driver that creates contextualized targets in the teacher model and hence performance. The authors also show that contextualization through the teacher model works best with a complete view of the input rather than a partial view. On top of not being able to work across modalities, one drawback is that the model's structure still uses modality-specific encoding and masking schemes. In that regard, the perceiver architecture (Jaegle et al., 2021a), for example used in the Flamingo model, is a complementary approach worth exploring. An earlier model that works across modalities is VilBert.

3.5.2 Vision-and-Language Bert (VilBert)

As seen in the previous section, data2vec can handle text, image or speech as input data.
However, it cannot do so at the same time. The model's focus is on unifying the training approach rather than working across modalities. Yet when we think about multimodal models, we usually think of working with different modalities at the same time. VilBert (Lu et al., 2019b) is a natural extension of the iconic Bert architecture (Devlin et al., 2018c) to vision-and-language modelling. An immediate question is whether vision and language inputs should be handled together in a single stream or in parallel. As we will see, it turns out that encoding inputs in parallel and working with parallel streams increases performance. At the heart of that architecture is a co-attention mechanism which enables information exchange between both modalities.

FIGURE 3.69: Lu et al. (2019b): VilBert's Dual Stream Architecture: dashed transformer modules can be repeated, co-attention modules allow sparse interaction between modalities.

Figure 3.69 shows the employed parallel stream architecture. Each modality is handled separately and fed into two Bert-style transformer models. This allows both modalities to be handled according to their respective needs, while co-attention layers allow for communication between the streams. For the language stream, the encoding uses the vocabulary plus a special classification token (cls), a sentence separation token (sep) and a masking token (mask). For the vision stream, image region features are extracted via a Faster R-CNN (Ren et al., 2015) model which was pre-trained on the Visual Genome Dataset (Krishna et al., 2016). Since image regions lack a natural ordering, their spatial locations have to be encoded as well. VilBert achieves that through a five-dimensional vector that encapsulates the image coordinates and the fraction of the covered image area. Through projection, the dimensions of the positional encoding and visual features are matched and then summed. The image token marks the beginning of such an image region sequence while representing the whole image. Through the dual stream architecture, the complexity of the model can be adjusted separately for each modality.
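This five-dimensional spatial encoding can be sketched as follows. It is a minimal sketch: the exact normalisation, the function names and the projection matrix `w_pos` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def spatial_encoding(box, img_w, img_h):
    """5-d position vector for an image region: normalised corner
    coordinates plus the fraction of the image area the box covers."""
    x1, y1, x2, y2 = box
    area_frac = (x2 - x1) * (y2 - y1) / (img_w * img_h)
    return np.array([x1 / img_w, y1 / img_h,
                     x2 / img_w, y2 / img_h, area_frac])

def region_embedding(visual_feat, box, img_w, img_h, w_pos):
    """Project the 5-d position vector to the feature dimension, then
    sum it with the Faster R-CNN region feature."""
    return visual_feat + w_pos @ spatial_encoding(box, img_w, img_h)
```

A 50x50 box in a 100x100 image, for instance, covers a quarter of the image, so the last component of its encoding is 0.25.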
An alternative approach would be to discretize the visual space via clustering and then use the resulting tokens in the same way as text tokens. The drawbacks of that approach are the potential loss of detail at the discretization stage and the loss of flexibility across modalities as a result of the identical processing. Finally, a single stream architecture can interfere with the pre-training of the language models. The model will have to be fine-tuned based on the created visual tokens. As those might be very different from the text tokens, there is potential for the pre-trained language model to become 'damaged' in the process and lose capabilities - an idea that is also central to the Flamingo model presented later on.

[Figure 3.69 graphic: image regions v0, …, vT and words w0, …, wT (example input: "Man shopping for fruit") are embedded and passed through stacks of transformer (TRM) and co-attention transformer (Co-TRM) blocks, producing outputs hv0, …, hvT and hw0, …, hwT.]

FIGURE 3.70: Lu et al. (2019b): Cross-Attention in VilBert

The key innovation in the VilBert paper (Lu et al., 2019b) is the use of co-attention layers. In figure 3.70, the basic architecture is depicted. The co-attention module computes query, key and value matrices in a standard transformer attention fashion. However, it then feeds the keys and values from each modality into the other modality's multi-head attention block. As a result, the visual attention will be conditioned on text whereas the language attention will be image-conditioned. This communication between streams only occurs at specific sections in the model, denoted by co-trm in figure 3.69.
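The described key/value exchange can be illustrated with a minimal single-head sketch (the names are our own; the actual model uses multi-head attention with residual connections, layer norms and separate parameters per stream):

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return (w / w.sum(axis=-1, keepdims=True)) @ v

def co_attention(h_vis, h_lang, p_vis, p_lang):
    """Each stream computes Q, K, V as usual, but attends over the
    *other* stream's keys and values."""
    qv, kv, vv = (h_vis @ p_vis[n] for n in "qkv")
    ql, kl, vl = (h_lang @ p_lang[n] for n in "qkv")
    vis_out = attention(qv, kl, vl)   # vision queries, language keys/values
    lang_out = attention(ql, kv, vv)  # language queries, vision keys/values
    return vis_out, lang_out
```

Note that each output keeps the sequence length of its own stream: the exchange happens only in what is attended over, not in what is produced.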
Notably, the language stream features a lot more preprocessing before the first co-attention layer than the image stream. An interesting question to ask is what is actually learned in those attention layers and how it corresponds to human attention maps. Sikarwar and Kreiman (2022) analyze the efficacy of co-attention layers for VQA tasks in a VilBert network. Specifically, they compute the question-conditioned image attention scores and compare them to human attention maps created in experiments. In those experiments, humans are tasked with unblurring specific image regions to answer the same questions one would expect the machine learning model to answer. Such human attention maps are collected in the VQA-HAT dataset (Das et al., 2017). Rank correlation is used to compare attention maps.
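Rank correlation here means Spearman correlation: the Pearson correlation of the rank-transformed attention values, which compares the ordering of regions rather than the raw attention magnitudes. A minimal, tie-free sketch:

```python
import numpy as np

def rank_correlation(a, b):
    """Spearman rank correlation of two flattened attention maps
    (ignores ties for simplicity)."""
    def ranks(x):
        r = np.empty(x.size)
        r[np.argsort(x)] = np.arange(x.size)
        return r
    ra, rb = ranks(np.ravel(a)), ranks(np.ravel(b))
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))
```

Identically ordered maps score 1.0 and reversed orderings score -1.0, regardless of how peaked either attention distribution is.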
Sikarwar and Kreiman (2022) find that in a six-layer network, rank correlation plateaus at layer 4 and increases with the number of image regions proposed while encoding the images. Perhaps more surprisingly, they find a minimal influence of semantics on the generation of the attention maps. Randomly shuffling words in a sentence when testing the model barely changes the attention output, which suggests that keywords rather than sentence structures drive the attention output. Note, however, that while the attention maps remained similar, the model's actual performance on answering the questions dropped notably, by approximately 15%, such that it seems clear that coherent sentences are important for the overall VQA task, but not for the attention creation process.

[Figure from Lu et al. (2019b): (a) standard encoder transformer block; (b) co-attention transformer layer. By exchanging key-value pairs in multi-headed attention, this structure enables vision-attended language features to be incorporated into visual representations (and vice versa).]

What are the keywords that drive cross-attention in VilBert?
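Before returning to that question: the key-value exchange that defines a co-attention layer can be sketched in a few lines. This single-head NumPy version is illustrative only; ViLBERT uses multi-head attention with learned projections, feed-forward blocks, and layer normalization around each exchange.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product attention."""
    scores = queries @ keys.T / np.sqrt(queries.shape[-1])
    return softmax(scores) @ values

def co_attention(vis, lang):
    """One co-attention exchange: queries come from one stream while keys
    and values come from the other stream, and vice versa."""
    vis_attended = cross_attention(vis, lang, lang)   # vision attends to language
    lang_attended = cross_attention(lang, vis, vis)   # language attends to vision
    return vis + vis_attended, lang + lang_attended   # residual connections

rng = np.random.default_rng(0)
vis = rng.normal(size=(36, 64))   # e.g. 36 image-region features
lang = rng.normal(size=(20, 64))  # e.g. 20 language-token features
v_out, l_out = co_attention(vis, lang)
print(v_out.shape, l_out.shape)  # (36, 64) (20, 64)
```

Each stream keeps its own sequence length and dimensionality; only the source of keys and values is swapped.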
The evidence provided by the authors clearly shows that nouns are the most influential parts of speech for the attention maps. On top of that, prepositions can sometimes help identify spatial relations. There is also some support for the hypothesis that removing Wh-words such as "who" and "where" can improve fine-grained attention maps in the final layer, which might be worth exploring further as preprocessing for deeper networks. Another approach would be to improve the way attention maps are generated by finding ways to include more of the available sentence information. Most notably, however, using object-based region proposals to process images can lead to bottlenecks that prevent the model from learning sufficiently fine-grained attention maps, as shown in figure 3.71. Overall, humans are naturally good at VQA tasks.
Hence, it is not surprising that attention maps which correlate well with human attention maps also improve model performance.

FIGURE 3.71: Sikarwar and Kreiman (2022): (left to right) picture, human attention, 36 regions, 72 regions, 108 regions. Similarity between human and model attention is measured using rank correlation.

Figure 3.71 shows that the number of region proposals fed into the model after processing an image affects the ability of the model to produce adequate attention maps. In this particular case, the question "How many fingers is the girl in the black shirt holding up?" was correctly answered by humans, as well as by a VilBert model using 72 or 108 region proposals. It was answered incorrectly when using only 36 region proposals.
Note, however, that in either case the model captured the face of the wrong girl. The model using 72 regions also identified the wrong hand despite answering the question correctly. While the 108-region model identifies the correct hand holding up the fingers, it does not seem to prioritize it over the other identified hands in the picture. Hence, the attention maps are sufficiently different from the human attention map, which highlights the need to look closer not only at how models are performing, but also at how their performance has been achieved. As far as the model training is concerned, VilBert is pre-trained and fine-tuned. The pre-training tasks comprise masked multimodal modelling and multimodal alignment prediction, performed on the Conceptual Captions dataset. That dataset contains about 3.1 million usable aligned image-caption pairs, which have been automatically scraped from web images. For the alignment task, the authors create unaligned pairs by randomly mismatching captions and images. For the masking task, 15% of both the visual and language tokens are masked. The task is to reconstruct the masked tokens from the remaining input in a classical Bert fashion.
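The masked multimodal objective can be sketched in toy form. The token sequence and the class distributions below are made up for illustration; in the real model, masking happens on the input embeddings and the target distribution for a masked image region comes from the frozen feature extractor.

```python
import math
import random

def mask_tokens(tokens, rate=0.15, mask="[MASK]", seed=0):
    """Mask roughly `rate` of the tokens, remembering the originals."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < rate:
            targets[i] = tok          # what the model must reconstruct
            masked.append(mask)
        else:
            masked.append(tok)
    return masked, targets

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete class distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

tokens = "a man riding a horse on the beach".split()
masked, targets = mask_tokens(tokens)
print(masked, targets)

# For a masked image region, the loss compares the feature extractor's
# class distribution (target) with the model's prediction.
target_dist = [0.7, 0.2, 0.1]
predicted = [0.5, 0.3, 0.2]
print(kl_divergence(target_dist, predicted) > 0.0)  # True
```

The KL term is zero exactly when the predicted distribution matches the target, so minimizing it pushes the prediction toward the feature extractor's output.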
While the text masks are directly regressed as in Bert, the model predicts distributions over semantic classes for the masked image regions. This is achieved by minimizing the KL divergence, a measure of the similarity of distributions, between the output distribution of the pre-trained model used for feature extraction and the VilBert predictions. The performance results are depicted in figure 3.72.

FIGURE 3.72: Lu et al. (2019b): VilBert performance (transfer-task results compared with state-of-the-art models and architectural ablations).

As mentioned before, the dual-stream architecture outperforms the single-stream architecture. Furthermore, pre-training considerably boosts performance, as does fine-tuning. Interestingly, the authors also study the effect of the dataset size and of the architecture depth.
Performance increases monotonically with dataset size, suggesting that performance can be further improved with more data. The results on the optimal layer depth are task-dependent. VQA and image retrieval reach peak performance at 6 layers, where a layer denotes a repeatable block as depicted in figure 3.69. Zero-shot image retrieval greatly benefits from even deeper models. However, the VCR and RefCOCO+ tasks seemingly benefit from shallower models. The VQA task is based on the VQA 2.0 dataset. Each image must be matched to one of ten answers. Hence, the VQA task is not open-ended, but treated like a classification task.
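Treating VQA as classification over a fixed answer set can be sketched as follows. The element-wise fusion of the image and text summary vectors follows the description in the text, but the function name, weights, and dimensions here are made up for illustration.

```python
import numpy as np

def vqa_answer_scores(img_vec, cls_vec, W1, W2):
    """Hypothetical VQA head: fuse the image and text summary vectors by
    element-wise product, then score a fixed answer set with a two-layer MLP."""
    fused = img_vec * cls_vec                 # element-wise fusion
    hidden = np.maximum(0.0, fused @ W1)      # ReLU hidden layer
    return hidden @ W2                        # one logit per candidate answer

rng = np.random.default_rng(0)
dim, hidden_dim, n_answers = 64, 32, 10      # made-up sizes; ten answers as in VQA 2.0
img_vec, cls_vec = rng.normal(size=dim), rng.normal(size=dim)
W1 = rng.normal(size=(dim, hidden_dim))
W2 = rng.normal(size=(hidden_dim, n_answers))
logits = vqa_answer_scores(img_vec, cls_vec, W1, W2)
answer = int(np.argmax(logits))              # classification, not open-ended generation
print(logits.shape, 0 <= answer < n_answers)  # (10,) True
```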
To achieve that, the model is amended by two MLP layers which operate on the element-wise product of the model-generated img and cls tokens. The VCR task is also posed as a multiple-choice problem, with images from movie scenes. To fine-tune for the task, questions and answers are concatenated into four different text inputs and given to the model together with the image. In the end, four scores are generated accordingly and selected through softmax. The RefCOCO+ task is a grounding task: an image region has to be selected according to a natural language reference. Caption-based image retrieval requires the model to find an image that corresponds to a selected caption. The dataset used is the Flickr30k dataset, which contains 30,000 pictures with five captions each that are of higher quality than the automatically generated captions from web data.

3.5.3 Flamingo

The VilBert model showed one way to actually combine visual and language inputs. In contrast, data2vec showed how to design an unsupervised model and how influential the actual training process as well as contextualization can be. A natural question to ask, then, is whether we can build a truly multimodal architecture like VilBert that is self-supervised like data2vec or requires little task-specific training, and how to optimize its training procedure. In particular, both VilBert and data2vec were tested on multiple tasks, but each task needs slight re-adjustments of the model as well as additional fine-tuning. Ideally, a multimodal architecture would not only be efficient in its initial training, but also easily adaptable to different tasks. Finding ways to work not only with different input modalities, but also with different tasks, is crucial towards building a more general AI. A promising approach in that direction is few-shot learning.
The following section presents Flamingo (Alayrac et al., 2022), a few-shot multimodal architecture developed by DeepMind, which comprises key innovations such as handling arbitrarily interleaved visual and language sequences as inputs, as well as ways to effectively combine pre-trained vision-only and language-only models. As such, it is a visually conditioned autoregressive text generation model. Figure 3.73 demonstrates Flamingo's capabilities: it can function as a chat bot, describe pictures, and work with image sequences (videos), and in doing so simply needs a few prompts.

FIGURE 3.73: Alayrac et al. (2022): Flamingo prompt-output examples.

At the heart of the model is a large language model, Chinchilla (Hoffmann et al., 2022), with 70B parameters. Large language models such as GPT-3 (Brown et al., 2020), as their name suggests, can be trained on a large amount of text data, which gives them impressive text-generation capabilities. However, multimodal generative modelling presents some specific challenges not present in language-only modelling. First of all, training large language models is expensive. Hence, it is paramount to work with a pre-trained version, but trying to teach a large language model to work with visual inputs as well has the potential to deteriorate or destabilize the pre-trained model. Second, large language models can suffer from memory constraints that are potentially severely aggravated by simply adding high-dimensional visual data to an input sequence. Third, good generalist capabilities typically require a huge amount of heterogeneous training data, and there might not exist enough labelled image-caption-pair data to successfully train a capable few-shot learning model in the vision-and-language domain. To train Flamingo, the authors solve these challenges foremost by exploring ways to generate their own web-scraped multimodal dataset, similar to existing ones in the language-only domain. Furthermore, they use a perceiver architecture (Jaegle et al., 2021a) that resamples inputs into a fixed number of visual tokens. Finally, the self-attention layers of the language model are kept frozen during training while cross-attention layers are interleaved. A gating mechanism ensures that those new cross-attention layers do not interfere at model initialization, thereby improving stability and final performance. Figure 3.74 shows the fundamental architecture of Flamingo.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' A pre-trained vision model as well as a pre-trained language model are frozen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Together they built the cornerstones of the model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' The vision model is pre-trained using a contrastive text-image approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Its role is to extract features such as colour, shape, nature and the position of objects - typical semantic spatial features that one would use in querying.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' The language model is an existing pre-trained language model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' On top of those frozen parts, the authors add a perceiver-resampler and gated cross-attention layers as learnable architectures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' The perceiver-resampler turns the outputs of the vision model into a fix set of visual tokens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6tE4T4oBgHgl3EQfCAsz/content/2301.04856v1.pdf'} +page_content=' Those visual tokens are then used to create cross-attention layers which are interleaved into the frozen language model.' 
As a result, Flamingo can model the likelihood of some text y interleaved with a sequence of images or videos x as

$p(y \mid x) = \prod_{l=1}^{L} p(y_l \mid y_{<l}, x_{\leq l})$

FIGURE 3.74: Alayrac et al. (2022): Flamingo Model Structure

1 T. This behavior is consistent with the in-plane A-type layered AFM order in CrSBr24. We note that the saturating field is lower than standard exfoliated CrSBr samples due to a built-in strain which we determine to be ≈ 0.9 % from the Raman spectra (Methods, and Extended Data Fig. 1).
Using the difference in resistance between the FM (Rp) and AFM (Rap) states, we find the tunneling magnetoresistance ratio to be TMR (%) = (Rap − Rp)/Rp × 100 ≈ 3100 %, on par with other 2D A-type AFM tunnel junctions19-21, albeit at much higher operating temperature.

Strain Switching MTJ

When the piezo voltage is increased, the TMR decreases dramatically (Fig. 2a). Furthermore, the shape of the tunneling magnetoresistance curves evolves from a giant, purely negative magnetoresistance (i.e., decreasing resistance with increasing field) at low strain to a small positive MR at high strain (Extended Data Fig. 2), with complex, hysteretic behavior in between, e.g., the curve at 5 V in Fig. 2a. The large decrease in TMR and switching from negative to positive magnetoresistance implies that the interlayer magnetic coupling is switched from AFM to FM at large strain. This picture is confirmed by comparison of the strain-dependent photoluminescence (PL) with the magnetoresistance. The PL shows the characteristic red-shift from the strain-induced AFM to FM phase transition, as demonstrated in a previous report18, which is concurrent with the large changes in tunneling magnetoresistance (Figs. 2b-c). The close correspondence between the magneto-PL and tunneling magnetoresistance is a consequence of the coupling of spin and charge in magnetic semiconductors, which forbids or allows interlayer electronic hybridization and tunneling in the AFM and FM states, respectively.

In the low-strain state, the A-type AFM structure creates tunnel barriers composed of spin filters with alternating spin orientation. In the FM state, however, the tunnel barrier is uniform for all layers, i.e., all spin filters are aligned in the same direction. As a result, applying a saturating magnetic field at low strains strongly enhances the tunneling current with respect to the AFM state (top panel, Fig. 2d).
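As a quick numerical sketch of the TMR figure of merit defined above (the resistance values here are illustrative placeholders, not the measured data):

```python
def tmr_percent(r_ap: float, r_p: float) -> float:
    """Tunneling magnetoresistance ratio, TMR (%) = (R_ap - R_p) / R_p * 100,
    comparing the antiparallel (AFM) and parallel (FM) resistance states."""
    return (r_ap - r_p) / r_p * 100.0

# Illustrative values: an AFM-state resistance 32x the FM-state resistance
# corresponds to a TMR of 3100 %, the magnitude reported in the text.
print(tmr_percent(32.0, 1.0))  # 3100.0
```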
At high strains, however, there is little difference between the zero- and high-magnetic-field tunneling behavior, as expected for a FM tunnel barrier25 (Fig. 2d, bottom). The combination of optical and tunneling measurements unambiguously proves that the strain-induced AFM to FM phase transition is the cause of the large tunneling magnetoresistance switching, excluding trivial origins such as contact failure during the straining process.

We realized strain switching of the MTJ at zero magnetic field. Figure 3a shows the tunneling resistance as the piezo voltage is continually increased. At around 5 V, the sample experiences a switch from AFM to FM states accompanied by a sharp drop in resistance. This strain-induced phase transition generates a TMR ratio of ≈ 2700 %, comparable to the field-induced TMR in the AFM state. When the tension is released, the resistance recovers to its original value. The observed hysteresis between up and down strain sweeps is likely due to a combination of the piezo stack hysteresis and hysteresis in the first-order magnetic phase transition itself. This switching operation is robust over many cycles, with no obvious slipping or degradation over the entire measurement (> 50 strain sweeps).

The strain-switching operation of the MTJ persists to much higher temperature than other 2D MTJs19-22,25-27. Figure 3b shows tunneling magnetoresistance vs strain cycles at select temperatures. At higher temperatures, the transition between low and high tunneling magnetoresistance states becomes broader, but a large strain switching ratio is maintained. As shown in Fig. 3c, the zero-field strain-induced TMR exceeds 10,000 % at 30 K and remains above 100 % up to ≈ 140 K. Interestingly, a dome of positive magnetoresistance as a function of field can still be induced by a large strain at 155 K, well above the Néel temperature of 132 K reported in previous studies24,28,29 (Fig. 3d).
A likely explanation is that the enhancement of the interlayer FM exchange induces a long-range ordering of the previously reported intermediate FM (iFM) phase, where the individual layers are ferromagnetically ordered but the interlayer coupling remains paramagnetic29.

Strain programmable layer-dependent magnetism

An intriguing feature of the strain-dependent TMR sweeps is that there are multiple resistance jumps during the AFM-FM phase transition, indicating the formation of multiple magnetic domains in the junction area of about 500 x 500 nm2. These domains are also evident from the complex, hysteretic behavior observed in the field-dependent TMR measurements (Fig. 2a, 5 V). Similar magnetic domain behavior is observed in both the nanoscale junction region and across several microns of the sample in magneto-PL (Extended Data Fig. 3). These results suggest the formation of vertical instead of lateral magnetic domains during the phase transition. The domains may arise from small vertical strain gradients. Thus, near the critical strain of the magnetic phase transition, the interlayer coupling can be FM for some layers and AFM for others. These layer-wise magnetic domains could serve as individual magnetic memory states which can be precisely manipulated by strain.

To explore active control of layer magnetization flipping, we set the static strain near the phase transition and then apply strain pulses with a small and controllable amplitude VPAC (see Fig. 4a inset). Figure 4a shows the tunneling current over time as VPAC is increased from 5 mV to 0.25 V. As the pulse reaches an amplitude of ≈ 24 mV, corresponding to a strain of only ≈ 0.0008 %, the amplitude of the tunneling current pulses jumps into a distinctly stable state (left-most purple arrow in Fig. 4a). This indicates the MTJ switches between two magnetization states, with the strain pulse actively flipping the magnetization direction of individual layers.
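The identification of discrete jumps in the pulse-response curve described above can be sketched as a simple threshold on consecutive differences; the arrays and threshold below are illustrative, not the measured data:

```python
def detect_jumps(amplitudes, currents, min_jump):
    """Return the pulse amplitudes at which the tunneling-current response
    changes abruptly between consecutive points (candidate layer-flip events)."""
    return [amplitudes[k] for k in range(1, len(currents))
            if abs(currents[k] - currents[k - 1]) > min_jump]

# Illustrative sweep: a smooth response with two abrupt steps.
amps = [0.010, 0.020, 0.024, 0.050, 0.100, 0.150]  # pulse amplitude (V)
cur = [1.00, 1.01, 1.60, 1.62, 2.30, 2.31]         # current response (a.u.)
print(detect_jumps(amps, cur, min_jump=0.3))  # [0.024, 0.1]
```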
Calculating the gauge factor, GF = (ΔR/R)/ε, gives an exceptionally large value of ≈ 3500, among the largest values reported in any system30,31.

By increasing the magnitude of the strain pulse, the number of layers whose magnetization can be flipped also increases. This is evidenced by the additional distinct jumps in tunneling current with increasing pulse amplitude (purple arrows in Fig. 4a). With a large enough strain pulse, the static state current abruptly increases, indicating a change in the static magnetic configuration. This behavior is completely different than what is observed in the purely FM or purely AFM states, where increasing strain pulse magnitude only produces small, continuous changes at a gauge factor three orders of magnitude smaller, and with no change in the static current (Extended Data Fig. 4). Therefore, we conclude that the strain pulse switching observed in Fig. 4a arises from changing the vertical domain structure of the mixed magnetic states. These results demonstrate that multiple individual magnetic domains, including the static magnetic state, can be controlled by applying extremely small strain pulses.

Stochastic domain switching

The demonstrated ability to switch the layer-dependent magnetization suggests that strain can tune the MTJ into a regime where the AFM and FM interlayer couplings are extremely close in energy. Starting from a stable magnetic domain structure, we increase the static strain, VPDC, by 14 mV, as indicated by the red arrow in the top panel of Fig. 4b. In such a condition, the tunneling current proceeds to fluctuate between two values (Fig. 4b, bottom). By decreasing the piezo voltage back to the original value (blue arrow in Fig. 4b, top), the tunneling current returns to a stable value. The current fluctuations can be reliably turned on and off, as demonstrated. To our knowledge, this is the first realization of p-bit type operation using a vdW MTJ.
This functionality is enabled by the unique ability of strain to finely and continuously tune the energy barrier between parallel and anti-parallel spin configurations, enabling in-situ switching from stable, MRAM-type to stochastic, p-bit-type domains (Fig. 4c).

By defining the lower current state as a 0 and the higher current state as a 1, we can convert the data to a binary sequence and analyze how the statistics of the domain switching respond to external control knobs, i.e., applied bias voltage and strain. We find that increasing the bias voltage applied to the tunnel junction leads to a large increase in the switching rate (Fig. 4d). Intriguingly, no switching is observed when a current of similar magnitude flows in the opposite direction (Extended Data Fig. 5). This bias-polarity dependence implies that heating is not the origin of the increased switching rate. Instead, the data suggest that the sample has an asymmetric vertical magnetic domain structure, which creates a difference in spin polarization and thus spin transfer torque effects when the current is passed in opposite directions12 (Extended Data Fig. 5). Whether such an asymmetric domain structure can give rise to exchange bias32, magnetic ratchet effect33, and other spintronics physics within a single crystal is a fascinating direction for future studies.

The relatively high Néel temperature (TN = 132 K) of CrSBr in comparison to other 2D A-type AFMs creates opportunities for potential device applications operating above liquid nitrogen temperature. Figure 4e shows the response function (ρ) of the MTJ as a function of the static piezo voltage with a starting value near the strain-induced phase transition at 85 K. The response function is calculated by converting the MTJ output to a binary sequence and calculating the average over the entire time window.
Therefore, a response function value of 0 or 1 indicates a stable magnetic +domain, while a value of 0.5 indicates equal fluctuations between the two stable states. The ability +to finely tune the response function should enable both random number generation at ρ = 0.5 and +a biased Bernoulli sequence at higher or lower values, which can be important for applications +dealing with Ising and probabilistic computing12. We further note that the applied bias voltage may +also be used to tune the response function by increasing or decreasing the switching rate, +potentially providing fine control near the edges of the sigmoidal curve, while also enabling +interaction between multiple p-bits. In principle, the two independent control parameters (strain +and bias voltage) could also offer independent tuning of the effective temperature and energy +landscape of the p-bit, thereby allowing direct stochastic annealing of a p-bit system. Such a +scheme could significantly reduce the circuit complexity required to realize a large-scale analog +p-bit annealer, though additional study is needed to establish the full mapping between our two- +dimensional voltage landscape and the statistical mechanical state space of the p-bit dynamics. +To test the stochasticity of our device, we analyze the switching data taken when ρ ≈ 0.5, +generating a binary sequence with near equal 1s and 0s, as shown in Figs. 4f-g. Since the lock-in +detection scheme reads the current much faster than the domain switching rate, we sample the raw +data at a frequency which is slower than the calculated switching rate to prevent non-random runs +of 1s and 0s (see discussion in Supplementary Information). We tested the data using the NIST + +test suite (Fig. 4g) and by analyzing the rise and dwell time of the switching events, which shows +that the device spends equal amounts of time in the 0 and 1 state within the experimental error +(Supplementary Information). 
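The binarization and statistics described above can be sketched as follows, under the assumption of a simple mid-level current threshold; the trace, threshold, and tolerance are illustrative, not the measured data or the full NIST test suite:

```python
def to_bits(current_trace, threshold):
    """Binarize a tunneling-current time trace: 1 above threshold, 0 below."""
    return [1 if i > threshold else 0 for i in current_trace]

def response_function(bits):
    """Fraction of samples in the high-current state (rho in the text);
    0 or 1 indicates a stable domain, 0.5 indicates equal fluctuation."""
    return sum(bits) / len(bits)

def monobit_balanced(bits, tol=0.05):
    """Crude frequency check in the spirit of the NIST monobit test:
    an unbiased p-bit should give a fraction of 1s close to 0.5."""
    return abs(response_function(bits) - 0.5) < tol

# Illustrative trace fluctuating between two current levels (arbitrary units).
trace = [1.0, 1.1, 3.0, 3.1, 1.0, 3.2, 1.1, 3.0]
bits = to_bits(trace, threshold=2.0)
print(bits)                     # [0, 0, 1, 1, 0, 1, 0, 1]
print(response_function(bits))  # 0.5
```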
These analyses, combined with their physical origin, strongly suggest that the metastable states switch stochastically, thereby acting as a random number generator.

In conclusion, we have demonstrated that strained single-crystal CrSBr offers a powerful platform for realizing zero-field programmable spintronic devices down to the atomically thin limit (Extended Data Fig. 6). Due to the versatile nature of vdW heterostructures, our results create a new path for various other programmable 2D quantum devices. For instance, replacing the graphite contacts with superconducting ones could enable field-free control of magnetic Josephson junctions34-37 and superconducting diode effects38-40. Moreover, the ability to switch the layer-dependent magnetization and vertical magnetic domain structure creates unprecedented opportunities to precisely vary the length of the FM and AFM tunnel barriers in-situ without significantly changing the overall thickness of the insulating CrSBr barrier layer. This capability could provide a new platform for exploring exotic phenomena that have been proposed in superconductor/ferromagnet junctions with inhomogeneous magnetization, such as spin triplet correlations. More generally, our clamping and strain technique greatly expands the accessible strain range for cryogenic transport experiments on 2D devices, which could enable exciting discoveries on the emergent quantum phenomena in vdW heterostructures, including moiré systems.

Methods

Device fabrication and strain application

To prepare the strain substrate, we first cut transparent 20 µm thick polyimide into strips and epoxied them onto 2D flexure sample plates produced by Razorbill Instruments† using Stycast 2850 FT epoxy. The distance between the edge of the epoxy on either side of the gap was less than 200 µm to enable large strains.

Bulk CrSBr crystals were grown by the same method detailed previously28.
The bulk CrSBr and graphite crystals were exfoliated onto PDMS substrates using standard methods, and thin (~ 10 nm) flakes were identified by optical contrast. The MTJs were then assembled through a dry transfer technique with a stamp consisting of a polypropylene carbonate (PPC) film spin-coated onto a polydimethylsiloxane (PDMS) cylinder. The flakes were picked up in the following order before being deposited onto the polyimide substrate: top graphite, CrSBr, bottom graphite. The long axis of the CrSBr flake was aligned with the strain axis for consistency with the previous studies18. After depositing the MTJ heterostructure, the window clamping pattern and electrical contacts to the two graphite contacts were fabricated using standard electron beam lithography techniques with metal thicknesses of 7 nm Cr and 70 nm Au, respectively. Then, the sample plate was screwed into the same symmetric three-piezo strain cell used previously18,23 for strain experiments on bulk crystals and our previous experiments on strained CrSBr.

To calibrate the strain during the experiment, we used the same Raman shift rate of the mode near ~ 346 cm-1 that we determined in the previous study18. We found that there was a rather large built-in strain of ~ 0.9 %, which is consistent with the small saturating field in the out-of-plane direction. The observation that the strain-induced phase transition occurs at negative piezo voltages at lower temperature is consistent with a thermally induced built-in strain which increases with cooling.

† Certain commercial processes and software are identified in this article to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the processes and software identified are necessarily the best available for the purpose.
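The Raman-based strain calibration described above amounts to a linear conversion from peak shift to strain; the shift rate below is an assumed placeholder, not the calibrated value from the measurement:

```python
# Assumed linear calibration: the ~346 cm^-1 mode shifts at a constant
# rate with uniaxial strain. SHIFT_RATE is illustrative only.
SHIFT_RATE = -4.0         # cm^-1 per % strain (placeholder value)
OMEGA_UNSTRAINED = 346.0  # cm^-1, unstrained peak position

def strain_percent(omega_measured, rate=SHIFT_RATE, omega0=OMEGA_UNSTRAINED):
    """Uniaxial strain (%) inferred from the measured Raman peak (cm^-1)."""
    return (omega_measured - omega0) / rate

# With the placeholder rate, a peak red-shifted to 342.4 cm^-1 would
# correspond to ~0.9 % tensile strain.
print(round(strain_percent(342.4), 3))  # 0.9
```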
Optical measurements:

Optical measurements were performed using a backscattering geometry in a closed-cycle helium cryostat (Opticool by Quantum Design) with a nominal sample temperature of 60 K. An objective lens focused 632.8 nm light from a He/Ne laser to a spot size of ~ 1 µm. For Raman measurements, a laser power of 200 µW was used and the collected signal was dispersed using a 1800 mm-1 groove-density grating and detected by an LN-cooled charge-coupled device (CCD) with an integration time of 210 seconds. BragGrateTM notch filters were used to filter out Rayleigh scattering down to ~ 10 cm-1. A roughly linear background originating from weak polyimide photoluminescence was subtracted to increase the accuracy of the fitting results. For photoluminescence measurements, we used a laser power of 50 µW focused by the same objective. The collected light was dispersed by a 600 mm-1 groove-density grating and detected by the same CCD with a 20 second integration time.

Transport measurements:

Except for the data presented in Extended Data Fig. 6, the transport measurements were performed under the same measurement conditions (Opticool by Quantum Design) as the optical ones, enabling direct comparison between the observed phenomena. The data shown in Figures 1-3 and 4b were taken using standard two-terminal DC measurements with a Keithley 2450, while the rest of the data in Figure 4 were taken using AC detection with a DC offset voltage applied by a Zurich Instruments HF2 lock-in amplifier. The current was amplified by a current preamplifier (DL Instruments; Model 1211) with a sensitivity of 1 V/10−6 A. For the switching data used in Figs. 4e-f and the stochasticity analysis, a time constant of 5.082 ms with a fourth-order filter was used, which was found to give the best time resolution while maintaining a high signal-to-noise ratio.
+The 6L device in Extended Data Fig. 6 was measured in a PPMS DynaCool cryostat by Quantum +Design. The data in Fig. S6a-c were taken using the same AC detection scheme, but with an SR860 +lock-in amplifier. The switching data in Fig. S6d-e were obtained using a constant current +measurement scheme, which was achieved by putting a 100 MΩ resistor in series with the device. +The resistance signal was then pre-amplified by the differential-ended mode of SR560 with 20 +times amplification. +References: +1 +Baibich, M. N. et al. Giant Magnetoresistance of (001)Fe/(001)Cr Magnetic +Superlattices. Physical Review Letters 61, 2472-2475 (1988). +2 +Binasch, G., Grünberg, P., Saurenbach, F. & Zinn, W. Enhanced magnetoresistance in +layered magnetic structures with antiferromagnetic interlayer exchange. Physical Review +B 39, 4828-4830 (1989). +3 +Žutić, I., Fabian, J. & Das Sarma, S. Spintronics: Fundamentals and applications. Reviews +of Modern Physics 76, 323-410 (2004). +4 +Dieny, B. et al. Giant magnetoresistive in soft ferromagnetic multilayers. Physical +Review B 43, 1297-1300 (1991). + +5 +Julliere, M. Tunneling between ferromagnetic films. Physics letters. 54, 225 (1975). +6 +Moodera, J. S., Kinder, L. R., Wong, T. M. & Meservey, R. Large Magnetoresistance at +Room Temperature in Ferromagnetic Thin Film Tunnel Junctions. Physical Review +Letters 74, 3273-3276 (1995). +7 +Miyazaki, T., Tezuka, N. Giant magnetic tunneling effect in Fe/Al2O3/Fe junction. +Journal of magnetism and magnetic materials 139, L231 (1995). +8 +Yuasa, S., Nagahama, T., Fukushima, A., Suzuki, Y. & Ando, K. Giant room- +temperature magnetoresistance in single-crystal Fe/MgO/Fe magnetic tunnel junctions. +Nature Materials 3, 868-871 (2004). +9 +Parkin, S. S. P. et al. Giant tunnelling magnetoresistance at room temperature with MgO +(100) tunnel barriers. Nature Materials 3, 862-867 (2004). +10 +Ikeda, S. et al. 
Tunnel magnetoresistance of 604% at 300K by suppression of Ta +diffusion in CoFeB∕MgO∕CoFeB pseudo-spin-valves annealed at high temperature. +Applied Physics Letters 93, 082508 (2008). +11 +Borders, W. A. et al. Integer factorization using stochastic magnetic tunnel junctions. +Nature 573, 390-393 (2019). +12 +Safranski, C. et al. Demonstration of Nanosecond Operation in Stochastic Magnetic +Tunnel Junctions. Nano Letters 21, 2040-2045 (2021). +13 +Bapna, M. et al. Magnetostatic effects on switching in small magnetic tunnel junctions. +Applied Physics Letters 108, 022406 (2016). +14 +Mizrahi, A. et al. Neural-like computing with populations of superparamagnetic basis +functions. Nature Communications 9 (2018). +15 +Rippard, W., Heindl, R., Pufall, M., Russek, S. & Kos, A. Thermal relaxation rates of +magnetic nanoparticles in the presence of magnetic fields and spin-transfer effects. +Physical Review B 84 (2011). +16 +Camsari, K. Y., Sutton, B. M. & Datta, S. p-bits for probabilistic spin logic. Applied +Physics Reviews 6, 011305 (2019). +17 +Bhatti, S. et al. Spintronics based random access memory: a review. Materials Today 20, +530-548 (2017). +18 +Cenker, J. et al. Reversible strain-induced magnetic phase transition in a van der Waals +magnet. Nature Nanotechnology 17, 256-261 (2022). +19 +Song, T. et al. Giant tunneling magnetoresistance in spin-filter van der Waals +heterostructures. Science 360, 1214-1218 (2018). +20 +Wang, Z. et al. Very large tunneling magnetoresistance in layered magnetic +semiconductor CrI3. Nature Communications 9 (2018). +21 +Kim, H. H. et al. One Million Percent Tunnel Magnetoresistance in a Magnetic van der +Waals Heterostructure. Nano Letters 18, 4885-4890 (2018). +22 +Klein, D. R. et al. Probing magnetism in 2D van der Waals crystalline insulators via +electron tunneling. Science 360, 1218-1222 (2018). +23 +Hicks, C. W., Barber, M. E., Edkins, S. D., Brodsky, D. O. & Mackenzie, A. P. +Piezoelectric-based apparatus for strain tuning. 
Review of Scientific Instruments 85, +065003 (2014). +24 +Telford, E. J. et al. Layered Antiferromagnetism Induces Large Negative +Magnetoresistance in the van der Waals Semiconductor CrSBr. Advanced Materials 32, +2003240 (2020). + +25 +Wang, Z. et al. Magnetization dependent tunneling conductance of ferromagnetic +barriers. Nature Communications 12 (2021). +26 +Cai, X. et al. Atomically Thin CrCl3: An In-Plane Layered Antiferromagnetic Insulator. +Nano Letters 19, 3993-3998 (2019). +27 +Wang, Z. et al. Determining the phase diagram of atomically thin layered +antiferromagnet CrCl3. Nature Nanotechnology 14, 1116-1122 (2019). +28 +Scheie, A. et al. Spin Waves and Magnetic Exchange Hamiltonian in CrSBr. Advanced +Science 9, 2202467 (2022). +29 +Lee, K. et al. Magnetic Order and Symmetry in the 2D Semiconductor CrSBr. (2020). +30 +Wu, J. M. et al. Ultrahigh Sensitive Piezotronic Strain Sensors Based on a ZnSnO3 +Nanowire/Microwire. ACS Nano 6, 4369-4374 (2012). +31 +Yan, W. et al. Giant gauge factor of Van der Waals material based strain sensors. Nature +Communications 12 (2021). +32 +Meiklejohn, W. H. & Bean, C. P. New Magnetic Anisotropy. Physical Review 102, 1413- +1414 (1956). +33 +Lavrijsen, R. et al. Magnetic ratchet for three-dimensional spintronic memory and logic. +Nature 493, 647-650 (2013). +34 +Gingrich, E. C. et al. Controllable 0–π Josephson junctions containing a ferromagnetic +spin valve. Nature Physics 12, 564-567 (2016). +35 +Ai, L. et al. Van der Waals ferromagnetic Josephson junctions. Nature Communications +12 (2021). +36 +Idzuchi, H. et al. Unconventional supercurrent phase in Ising superconductor Josephson +junction with atomically thin magnetic insulator. Nature Communications 12 (2021). +37 +Kang, K. et al. van der Waals π Josephson Junctions. Nano Letters (2022). +38 +Narita, H. et al. Field-free superconducting diode effect in noncentrosymmetric +superconductor/ferromagnet multilayers. Nature Nanotechnology (2022). +39 +Ando, F. et al. 
Observation of superconducting diode effect. Nature 584, 373-376 (2020).
40. Wu, H. et al. The field-free Josephson diode in a van der Waals heterostructure. Nature 604, 653-656 (2022).

Acknowledgements: We thank Xuetao Ma and Yen-Cheng Kung for fabrication advice, G. C. Adam, W. A. Borders, and J. J. McClelland for proofreading the paper, and John Stroud and Heonjoon Park for their help during the initial stages of the project. The strain-controlled optical measurement is mainly supported by DE-SC0018171. The strain-controlled tunneling experiment is mainly supported by the Air Force Office of Scientific Research (AFOSR) Multidisciplinary University Research Initiative (MURI) program, grant no. FA9550-19-1-0390. CrSBr crystal synthesis is supported by the Center on Programmable Quantum Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under award DE-SC0019443. DGC is supported by the Columbia MRSEC on Precision-Assembled Quantum Materials (PAQM) (DMR-2011738). XX acknowledges support from the State of Washington funded Clean Energy Institute and from the Boeing Distinguished Professorship in Physics. JC acknowledges the Graduate Fellowship from the Clean Energy Institute funded by the State of Washington. ZL and JHC acknowledge the support of the David and Lucile Packard Foundation. This research was supported by an appointment to the Intelligence Community Postdoctoral Research Fellowship Program at the University of Washington, administered by Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and the Office of the Director of National Intelligence.

Author contributions: XX and John C conceived the project. John C performed the optical and transport measurements with help from Jiaqi C and GD. DO supervised transport measurements and contributed to fabrication development.
John C fabricated the samples with assistance from HY and ZL. John C, DO, TC, JHC, DX, and XX analyzed the data and interpreted the results. TC, MWD and DX provided theoretical support. DGC grew the CrSBr crystals with supervision from XR and XYZ. John C and XX wrote the manuscript with input from all authors. All authors discussed the results.

Competing interests: John C and XX have applied for a patent based on this work.

Data availability: The datasets generated during and/or analyzed during this study are available from the corresponding author upon reasonable request.

Figures:

Figure 1 | Straintronic van der Waals magnetic tunnel junction. a, Schematic of the magnetic state evolution of the CrSBr tunnel barrier with the application of either magnetic fields along the easy b axis or in-plane uniaxial strain. The changing magnetic configuration creates different resistance states when a bias is applied between the graphite contacts (grey). The red and blue arrows denote the spin direction within each layer. b, Schematic of the straintronic MTJ consisting of graphite contacts sandwiching a CrSBr tunnel barrier (blue). The whole device is fixed by gold clamps to a flexible polyimide substrate (purple) which is then strained. c, Magnetic field dependence of an MTJ using an ≈ 11 nm CrSBr tunnel barrier (optical image inset, scale bar 3 µm) at a temperature of 60 K. The device is slightly strained but remains in the AFM state at zero magnetic field. The magnetic field is applied along the hard c axis, leading to spin canting (inset arrows).

Figure 2 | Strain switchable magnetic tunnel junctions. a, Magnetoresistance sweeps at select piezo voltages with a fixed bias voltage across the MTJ of VB = 0.5 V. The sweeps are offset for clarity.
b, Full strain-dependent tunneling magnetoresistance with the magnetic field swept from positive to negative. c, Strain-dependent photoluminescence intensity plot. The beam spot was kept fixed on the junction region while the strain was continuously swept. d-e, Bias-dependent tunneling current with magnetic fields of 0 T (blue) and 3 T (green) applied in the low strain (d) and high strain (e) states. The magnetic state for each curve is depicted in the inset. All measurements were performed at a temperature of 60 K.

Figure 3 | Temperature dependent zero-field tunneling resistance switching. a, Tunneling resistance as a function of piezo voltage. A large TMR change of ≈ 2700 % is observed between the low and high strain states at 60 K. The change in magnetic state from AFM to FM interlayer coupling is depicted by the inset spin diagram. b, Piezo-voltage-dependent tunneling resistance at select temperatures from 30 K to 149 K. c, Temperature dependence of the tunneling magnetoresistance ratio, defined as TMR (%) = (Rap − Rp)/Rp × 100. d, Magnetic-field dependent tunneling resistance at 155 K in the low strain (blue) and high strain (red) states.
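The TMR ratio defined in the Figure 3 caption is simple arithmetic on the parallel- and antiparallel-state resistances. A minimal sketch of that conversion; the resistance values below are illustrative, not data from this work:

```python
def tmr_percent(r_ap: float, r_p: float) -> float:
    """Tunneling magnetoresistance ratio: TMR (%) = (R_AP - R_P) / R_P * 100."""
    return (r_ap - r_p) / r_p * 100.0

# Illustrative values: a parallel-state (FM) resistance of 1.0 GOhm and an
# antiparallel-state (AFM) resistance of 28.0 GOhm give TMR = 2700 %,
# comparable to the low/high-strain contrast quoted in the caption.
print(tmr_percent(28.0e9, 1.0e9))  # -> 2700.0
```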
Figure 4 | Strain control of multiple stable and stochastic layer-dependent magnetic domains. a, Tunneling current over time as strain pulses of increasing amplitude are applied. The inset shows the measurement scheme: a small pulse of amplitude VPAC is applied on top of a static piezo voltage VPDC. The system is initialized by slowly increasing VPDC until the magnetic phase transition starts to occur. As the pulse amplitude increases, the current switching stabilizes into discrete states (denoted by the purple arrows). Additionally, the resting current, i.e. the ground state, can be changed by a sufficiently large pulse. b, Tunneling current over time as the static piezo voltage, VPDC, is increased (red arrow) and then decreased (blue arrow) by 0.014 V. No strain pulse is applied. Bottom: finer time resolution data of the domain fluctuations observed in the top panel. c, Schematic of strain tuning between magnetic domains. A sufficiently high pulse, VPAC, will flip between AFM and FM domains (left). Fine adjustment of the static strain lowers the energy difference between AFM and FM domains, creating a metastable state with stochastic domain switching (right). d, Bias dependence of the switching rate in the metastable state. The piezo voltage is kept constant during the measurement. Data in panels a-d are taken at 60 K. e, Response function of a sensitive magnetic domain as a function of static piezo voltage at a temperature of 85 K. A value of either 0 or 1 indicates a stable domain.
f, Tunneling current (top) and converted binary sequence (bottom) over time when the response function is near 0.5, indicating an equal amount of fluctuations between the parallel and antiparallel configurations. g, P-values returned by the NIST random number test suite applied to the binary sequence from f. The black dashed line indicates a p-value of 0.01, the threshold for passing the specific test. The sampling time was 0.1760 seconds (see Supplementary Information).

Extended Data for

Strain-programmable van der Waals magnetic tunnel junctions

Authors: John Cenker1, Dmitry Ovchinnikov1, Harvey Yang1, Daniel G. Chica2, Catherine Zhu1, Jiaqi Cai1, Geoffrey Diederich1,3, Zhaoyu Liu1, Xiaoyang Zhu2, Xavier Roy2, Ting Cao4, Matthew W. Daniels5, Jiun-Haw Chu1, Di Xiao4,1, Xiaodong Xu1,4,*

1 Department of Physics, University of Washington, Seattle, Washington 98195, USA
2 Department of Chemistry, Columbia University, New York, NY 10027 USA
3 Intelligence Community Postdoctoral Research Fellowship Program, University of Washington, Seattle, WA, USA
4 Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
5 Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, 20899, USA

*Correspondence to: xuxd@uw.edu

Extended Data Fig. 1 | Calibration of strain through Raman spectroscopy.
a, Raman scattering from the P3 phonon taken on the tunnel junction region at a piezo voltage of 0 V. A linear background originating from the polyimide photoluminescence is subtracted. The narrow linewidth indicates a homogeneous strain. b, Raman intensity plot as a function of piezo voltage. The beam spot is kept on the junction as the piezo voltage is continuously increased. c, Measured strain as a function of the voltage applied to the strain cell. The strain is calculated by fitting the data from b with Lorentzian fits and then comparing the peak position to the unstrained value of 346 cm-1 using a strain shift rate of 4.2 cm-1/% as reported in previous studies. We found that there was a built-in strain of ~ 0.9 % at the lowest piezo voltage used at this temperature.

Extended Data Fig. 2 | Magnetoresistance sweeps at select piezo voltages. a-d, Magnetoresistance sweeps as the field is swept down (blue) and up (black) at select piezo voltages through the strain-induced layered magnetization flipping. At low strain (a), large negative magnetoresistance is observed, consistent with AFM order, while small positive magnetoresistance is observed in the high strain induced FM state (d). In between, complex and hysteretic magnetic domain behavior is observed.

Extended Data Fig.
3 | Magneto-photoluminescence mapping of magnetic domains. a-b, Comparison of tunneling magnetoresistance (a) and integrated intensity from magneto-photoluminescence (PL) (b) measurements at the same piezo voltage. The correlation of the curves highlights the connection of the interlayer magnetic coupling to both electronic tunneling and exciton luminescence. c, Optical image of the device with different spots labeled by different colors. d-g, Magneto-PL sweeps at each of the spots labeled in (c). The similarities between spots separated by several microns indicate the presence of vertical, rather than lateral, magnetic domains.

Extended Data Fig. 4 | Strain pulse data in the purely FM and AFM states. a, Strain pulse amplitude dependence in the purely FM state. As VPAC is increased from 0 to 0.5 V, a continuous change in the current is observed. The calculated gauge factor is ~ 5. b, Change in tunneling current over time as a strain pulse of 0.5 V is applied in the AFM state. Due to the very large resistance, the effect of pulses with smaller amplitude cannot be resolved. A gauge factor of ~ 30 is calculated, but with a large uncertainty due to the high resistance in the AFM state. No changes to the static current are observed in either FM or AFM states.
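The gauge factors quoted in Extended Data Fig. 4 follow the conventional strain-sensor definition GF = (ΔR/R)/Δε. A minimal sketch of that conversion, with illustrative numbers rather than values from this work:

```python
def gauge_factor(delta_r: float, r0: float, delta_strain: float) -> float:
    """Gauge factor GF = (dR / R0) / d(strain).

    `delta_strain` is a dimensionless fraction (0.01 corresponds to 1 % strain).
    """
    return (delta_r / r0) / delta_strain

# Illustrative numbers: a strain pulse of 0.05 % that changes a 1.0 GOhm
# junction resistance by 1.5 MOhm corresponds to a gauge factor of about 3.
print(gauge_factor(delta_r=1.5e6, r0=1.0e9, delta_strain=5e-4))
```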
Extended Data Fig. 5 | Bias-polarity-dependent stochastic switching indicates asymmetric vertical magnetic domain structure. a-b, Tunneling current over time of a metastable domain with a positive (a) and negative (b) bias applied to the MTJ. Despite a similar magnitude of current, no switching is observed under negative bias, ruling out heating effects. Instead, the data are consistent with an asymmetric vertical domain structure, as illustrated in (c). A plausible scenario is that when a positive voltage is applied, the FM layers polarize the tunneling electrons. These spin-polarized electrons apply a spin-transfer-torque-like effect to the AFM layers, enhancing the stochastic switching. On the other hand, when a negative bias is applied, the electrons are not highly polarized and do not exert a spin-transfer torque on the FM layers.

Extended Data Fig. 6 | Strain switching in a six-layer MTJ. a, Magnetoresistance sweeps of an MTJ with a six-layer CrSBr tunnel barrier as the piezo voltage Vp is increased from 32.5 V to 75 V. The domain behavior at piezo voltages between the low-strain AFM (32.5 V) and high-strain FM (75 V) states is much simpler than in the ~ 16-layer device presented in the main text, providing additional evidence that vertical, layer-dependent domains are the origin of the complex hysteretic domain behavior during the magnetic phase transition. The magnetic field is applied along the a axis at a temperature of 20 K.
b-c, Magnetoresistance sweeps in the low strain AFM (b) and high strain FM (c) states, showing the characteristic switching from negative to positive MR. The optical image of the device is shown inset in b (scale bar 5 µm). d-e, Resistance over time at select piezo voltages during the magnetic phase transition. Stochastic domain switching (d), which can be stabilized by slightly increasing the strain (e), is observed. These results highlight the potential for extending the strain-programmable vdW MTJs to the 2D limit.

Supplementary information for

Strain-programmable van der Waals magnetic tunnel junctions

Authors: John Cenker1, Dmitry Ovchinnikov1, Harvey Yang1, Daniel G. Chica2, Catherine Zhu1, Jiaqi Cai1, Geoffrey Diederich1,3, Zhaoyu Liu1, Xiaoyang Zhu2, Xavier Roy2, Ting Cao4, Matthew W.
Daniels5, Jiun-Haw Chu1, Di Xiao4,1, Xiaodong Xu1,4,*

1 Department of Physics, University of Washington, Seattle, Washington 98195, USA
2 Department of Chemistry, Columbia University, New York, NY 10027 USA
3 Intelligence Community Postdoctoral Research Fellowship Program, University of Washington, Seattle, WA, USA
4 Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
5 Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD, 20899, USA

*Correspondence to: xuxd@uw.edu

Supplementary Text: Additional stochasticity analysis of switching data taken near ρ = 0.5

Since the tunneling current is sampled much faster than the switching rate (~ 0.14 s), switching data collected over 200 seconds was downsampled and tested using 15 tests from the NIST test suite1. Maurer's Universal Test was excluded since the binary sequence was not long enough. The full sampling time dependence is shown below, using a standard threshold p-value of 0.01. The grey line indicates the sequence passed all of the 15 considered tests. The red line indicates the average domain switching time obtained by dividing the total number of switches by the total time window.

In addition to the NIST test suite, we analyzed the dwell time, i.e. the time between switches, of the 0 and 1 states. The extracted dwell times are plotted as a histogram for the 0 and 1 states, following an exponential envelope as expected for a Poisson process.
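The dwell-time analysis described above can be sketched as follows: threshold the sampled current into a binary telegraph signal, collect the dwell times of each state, and estimate the characteristic lifetime τ from the mean dwell time (for a Poisson process the dwell times are exponentially distributed with mean τ). The function names, thresholding scheme, and synthetic trace below are illustrative assumptions, not the authors' analysis code:

```python
import random

def to_binary(trace, threshold):
    """Threshold an analog current trace into a 0/1 telegraph sequence."""
    return [1 if x > threshold else 0 for x in trace]

def dwell_times(bits, dt):
    """Per-state dwell times (seconds) between switches.

    The final, unterminated run is dropped since its true duration is unknown.
    """
    dwells = {0: [], 1: []}
    run_state, run_len = bits[0], 0
    for b in bits:
        if b == run_state:
            run_len += 1
        else:
            dwells[run_state].append(run_len * dt)
            run_state, run_len = b, 1
    return dwells

# Synthetic telegraph noise with ~150 ms lifetimes, sampled every 10 ms.
random.seed(0)
dt = 0.010
p_switch = dt / 0.150  # per-sample switching probability for tau = 150 ms
state, bits = 0, []
for _ in range(20000):
    if random.random() < p_switch:
        state = 1 - state
    bits.append(state)

d = dwell_times(bits, dt)
# For exponentially distributed dwell times, the sample mean estimates tau.
tau0 = sum(d[0]) / len(d[0])
tau1 = sum(d[1]) / len(d[1])
print(f"tau0 = {tau0:.3f} s, tau1 = {tau1:.3f} s")  # both near 0.15 s
```

The paper instead extracts τ from a linear fit to log(N) of the dwell-time histogram; the mean-dwell estimator above is a simpler consistent alternative for an exponential distribution.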
We then plot the logarithm of the histogram bin counts (N) versus the dwell time. From the linear fits, we find that the characteristic lifetimes, τ, of the 0 and 1 states are τ0 = 159 ± 9 ms and τ1 = 151 ± 9 ms, respectively, where the uncertainty is determined by the standard deviation of the linear fit. Based on this analysis and the NIST test suite results, we conclude that the strained MTJ can generate binary sequences with a high degree of randomness.

References:

1. Ang, S., Chuchill, S., NIST Test Suite, GitHub Repository, https://github.com/stevenang/randomness_testsuite (2017)

diff --git a/7tE2T4oBgHgl3EQfPgY3/content/tmp_files/load_file.txt b/7tE2T4oBgHgl3EQfPgY3/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..11d01dac90254874b667415a0b94248b14244bd7 --- /dev/null +++ b/7tE2T4oBgHgl3EQfPgY3/content/tmp_files/load_file.txt @@ -0,0 +1,782 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf,len=781

Title: Strain-programmable van der Waals magnetic tunnel junctions

Authors: John Cenker1, Dmitry Ovchinnikov1, Harvey Yang1, Daniel G. Chica2, Catherine Zhu1, Jiaqi Cai1, Geoffrey Diederich1,3, Zhaoyu Liu1, Xiaoyang Zhu2, Xavier Roy2, Ting Cao4, Matthew W.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Daniels5,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Jiun-Haw Chu1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Di Xiao4,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Xiaodong Xu1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='4,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='* 1 Department of Physics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' University of Washington,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Seattle,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Washington 98195,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' USA 2 Department of Chemistry,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Columbia University,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' New York,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' NY 10027 USA 3 Intelligence Community Postdoctoral Research Fellowship Program,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' University of Washington,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Seattle,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' WA,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' USA 4 Department of Materials Science and Engineering,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' University of Washington,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Seattle,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Washington 98195,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' USA 5 Physical Measurement Laboratory,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' National Institute of Standards and Technology,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Gaithersburg,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' MD,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' 20899,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' USA Corresponding author’s email: xuxd@uw.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='edu Abstract: The magnetic tunnel junction (MTJ) is a backbone device for spintronics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Realizing next generation energy efficient MTJs will require operating mechanisms beyond the standard means of applying magnetic fields or large electrical currents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Here, we demonstrate a new concept for programmable MTJ operation via strain control of the magnetic states of CrSBr, a layered antiferromagnetic semiconductor used as the tunnel barrier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Switching the CrSBr from antiferromagnetic to ferromagnetic order generates a giant tunneling magnetoresistance ratio without external magnetic field at temperatures up to ≈ 140 K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' When the static strain is set near the phase transition, applying small strain pulses leads to active flipping of layer magnetization with controlled layer number and thus magnetoresistance states.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Further, finely adjusting the static strain to a critical value turns on stochastic switching between metastable states, with a strain-tunable sigmoidal response curve akin to the stochastic binary neuron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Our results highlight the potential of strain- programmable van der Waals MTJs towards spintronic applications, such as magnetic memory, random number generation, and probabilistic and neuromorphic computing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Main Text: The control and readout of discrete magnetic states lies at the foundation of the fields of spintronics and modern information storage1-4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Standard spintronic devices utilize the spin filtering phenomenon, where spin-selective transport processes, such as electron tunneling through magnetic layers, create spin polarization and magnetoresistance5-10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Controlling the energetics and stability of the magnets in such devices, known as magnetic tunnel junctions (MTJ), has enabled many important technological advancements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' For instance, switching the orientation of the magnets from anti-parallel (AP) to parallel (P) in stable MTJs results in large changes to the tunneling magnetoresistance (TMR).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' This behavior is the conceptual basis for magnetic random- access memory (MRAM).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' On the other hand, when the magnetic layers are thinned so that the energy difference between P and AP states is small, the magnetic order becomes unstable and stochastic switching between the two states is observed11-15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Such stochastic MTJs can serve as probabilistic bits (p-bits), the fundamental building blocks for the emerging fields of probabilistic and neuromorphic computing11,16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Despite the great successes of conventional MTJs in both conventional and probabilistic computing schemes, writing the magnetic memory bits in current MRAM schemes tends to rely on energy-intensive means such as the application of large magnetic fields or currents17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Moreover, since the stability of the MTJ is fixed by the growth thickness, it is difficult to switch from stable MRAM operation to unstable p-bit functionality in the same device.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' The recent discovery18 of a reversible strain-induced magnetic phase transition in the air stable A-type layered antiferromagnetic (AFM) semiconductor CrSBr could offer both a new material platform and operating principle for controlling atomically thin MTJs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' The A-type AFM configuration consists of van der Waals (vdW) layers with intralayer ferromagnetic (FM) order and interlayer AFM coupling along the stacking direction, forming intrinsic spin filters that can generate exceptionally large TMR19-22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' These previous works have demonstrated that applying an external magnetic field to A-type antiferromagnets with weak interlayer exchange switches the magnetic state from the AFM, high resistance configuration to intermediate states with layer- dependent interlayer coupling, and then finally to a low resistance, field-induced FM state (Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' 1a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' In comparison to the previously studied devices which require continuous application of magnetic field to control the magnetic states, strain could provide an exceptionally energy-efficient operating mechanism as it requires essentially no current.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Moreover, the fine, continuous, and reversible tuning of the interlayer exchange could enable unprecedented control of the layer- dependent magnetic structure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Here, we demonstrate a strain-controlled vdW MTJ with programmable magneto- resistance states and stochastic switching, charting a path towards new memory and computing technologies.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' The schematic for our strain device is shown in Figure 1b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' The vdW MTJ heterostructure is composed of a CrSBr tunnel barrier sandwiched between two narrow graphite contacts.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' The whole MTJ is fixed to a stretchable polyimide substrate by a gold clamp with a small (≈ 5 µm) window around the junction (Methods).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' This design ensures a highly efficient strain transfer when the polyimide substrate is stretched by a home-built piezoelectric strain cell18,23, while also allowing for optical spectroscopy measurements of the junction region.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' The strain is applied along the crystallographic a axis for consistency with previous experiments18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' The data in the main text is taken on a MTJ with an ≈ 11 nm tunnel barrier, but the technique is compatible with CrSBr flakes of any thickness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Figure 1c shows the tunneling magnetoresistance as a function of magnetic field (µ0H) applied along the c axis.' 
In the low-strain condition, with a piezo voltage (Vp) of −5 V, CrSBr is in the AFM state at µ0H = 0 T. As |µ0H| increases, the spins cant away from the AFM configuration, gradually increasing the conductivity of the MTJ until it reaches the field-induced FM state for |µ0H| > 1 T. This behavior is consistent with the in-plane A-type layered AFM order in CrSBr24. We note that the saturating field is lower than in standard exfoliated CrSBr samples due to a built-in strain, which we determine to be ≈ 0.9 % from the Raman spectra (Methods and Extended Data Fig. 1). Using the difference in resistance between the FM (Rp) and AFM (Rap) states, we find the tunneling magnetoresistance ratio to be TMR (%) = (Rap − Rp)/Rp × 100 ≈ 3100 %, on par with other 2D A-type AFM tunnel junctions19-21, albeit at a much higher operating temperature.

Strain Switching MTJ

When the piezo voltage is increased, the TMR decreases dramatically (Fig. 2a). Furthermore, the shape of the tunneling magnetoresistance curves evolves from a giant, purely negative magnetoresistance (i.e., decreasing resistance with increasing field) at low strain to a small positive MR at high strain (Extended Data Fig. 2), with complex, hysteretic behavior in between, e.g., the curve at 5 V in Fig. 2a. The large decrease in TMR and the switching from negative to positive magnetoresistance imply that the interlayer magnetic coupling is switched from AFM to FM at large strain. This picture is confirmed by comparing the strain-dependent photoluminescence (PL) with the magnetoresistance. The PL shows the characteristic red-shift across the strain-induced AFM to FM phase transition, as demonstrated in a previous report18, concurrent with the large changes in tunneling magnetoresistance (Figs. 2b-c). The close correspondence between the magneto-PL and the tunneling magnetoresistance is a consequence of the coupling of spin and charge in magnetic semiconductors, which forbids or allows interlayer electronic hybridization and tunneling in the AFM and FM states, respectively. In the low-strain state, the A-type AFM structure creates tunnel barriers composed of spin filters with alternating spin orientation. In the FM state, however, the tunnel barrier is uniform for all layers, i.e., all spin filters are aligned in the same direction. As a result, applying a saturating magnetic field at low strain strongly enhances the tunneling current with respect to the AFM state (Fig. 2d, top). At high strain, however, there is little difference between the zero- and high-field tunneling behavior, as expected for a FM tunnel barrier25 (Fig. 2d, bottom). The combination of optical and tunneling measurements unambiguously proves that the strain-induced AFM to FM phase transition is the cause of the large tunneling magnetoresistance switching, excluding trivial origins such as contact failure during the straining process.

We realized strain switching of the MTJ at zero magnetic field. Figure 3a shows the tunneling resistance as the piezo voltage is continually increased. At around 5 V, the sample switches from the AFM to the FM state, accompanied by a sharp drop in resistance. This strain-induced phase transition generates a TMR ratio of ≈ 2700 %, comparable to the field-induced TMR in the AFM state. When the tension is released, the resistance recovers to its original value. The observed hysteresis between up and down strain sweeps is likely due to a combination of the piezo stack hysteresis and hysteresis in the first-order magnetic phase transition itself. This switching operation is robust over many cycles, with no obvious slipping or degradation over the entire measurement (> 50 strain sweeps). The strain-switching operation of the MTJ persists to much higher temperatures than in other 2D MTJs19-22,25-27. Figure 3b shows tunneling magnetoresistance vs strain cycles at select temperatures. At higher temperatures, the transition between the low and high tunneling magnetoresistance states becomes broader, but a large strain switching ratio is maintained. As shown in Fig. 3c, the zero-field strain-induced TMR exceeds 10,000 % at 30 K and remains above 100 % up to ≈ 140 K.
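The TMR ratio used here reduces to a one-line computation. The following is an illustrative sketch, not code from the paper; the resistance values are hypothetical numbers chosen only to reproduce the reported ~3100 % ratio.

```python
def tmr_percent(r_ap: float, r_p: float) -> float:
    """Tunneling magnetoresistance ratio: TMR (%) = (R_ap - R_p) / R_p * 100,
    where R_ap is the AFM (antiparallel) and R_p the FM (parallel) resistance."""
    return (r_ap - r_p) / r_p * 100.0

# Hypothetical resistances: an AFM-state resistance 32x the FM-state
# resistance reproduces the reported ~3100 % ratio.
print(tmr_percent(r_ap=32.0, r_p=1.0))  # -> 3100.0
```

The same expression applies to the zero-field, strain-induced ratio, with the low- and high-strain resistances taking the roles of R_ap and R_p.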
Interestingly, a dome of positive magnetoresistance as a function of field can still be induced by a large strain at 155 K, well above the Néel temperature of 132 K reported in previous studies24,28,29 (Fig. 3d). A likely explanation is that the enhancement of the interlayer FM exchange induces long-range ordering of the previously reported intermediate FM (iFM) phase, in which the individual layers are ferromagnetically ordered but the interlayer coupling remains paramagnetic29.

Strain programmable layer-dependent magnetism

An intriguing feature of the strain-dependent TMR sweeps is that there are multiple resistance jumps during the AFM-FM phase transition, indicating the formation of multiple magnetic domains in the junction area of about 500 × 500 nm2. These domains are also evident from the complex, hysteretic behavior observed in the field-dependent TMR measurements (Fig. 2a, 5 V). Similar magnetic domain behavior is observed both in the nanoscale junction region and across several microns of the sample in magneto-PL (Extended Data Fig. 3). These results suggest the formation of vertical, rather than lateral, magnetic domains during the phase transition. The domains may arise from small vertical strain gradients. Thus, near the critical strain of the magnetic phase transition, the interlayer coupling can be FM for some layers and AFM for others. These layer-wise magnetic domains could serve as individual magnetic memory states that can be precisely manipulated by strain.

To explore active control of layer magnetization flipping, we set the static strain near the phase transition and then apply strain pulses with a small, controllable amplitude VPAC (see Fig. 4a, inset). Figure 4a shows the tunneling current over time as VPAC is increased from 5 mV to 0.25 V. As the pulse reaches an amplitude of ≈ 24 mV, corresponding to a strain of only ≈ 0.0008 %, the amplitude of the tunneling current pulses jumps into a distinctly stable state (left-most purple arrow in Fig. 4a). This indicates that the MTJ switches between two magnetization states, with the strain pulse actively flipping the magnetization direction of individual layers. Calculating the gauge factor, GF = (ΔR/R)/ε, gives an exceptionally large value of ≈ 3500, among the largest reported in any system30,31. By increasing the magnitude of the strain pulse, the number of layers whose magnetization can be flipped also increases. This is evidenced by the additional distinct jumps in tunneling current with increasing pulse amplitude (purple arrows in Fig. 4a).
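The gauge factor quoted above is the standard definition, the fractional resistance change per unit strain. A minimal sketch, with hypothetical numbers chosen to match the reported scale (the paper does not give the specific ΔR/R used):

```python
def gauge_factor(delta_r: float, r: float, strain: float) -> float:
    """GF = (dR/R) / epsilon: fractional resistance change per unit strain
    (strain expressed as a dimensionless fraction, e.g. 0.0008 % -> 8e-6)."""
    return (delta_r / r) / strain

# Hypothetical values on the reported scale: a 2.8 % resistance change
# at a strain of 0.0008 % (epsilon = 8e-6) gives GF = 3500.
gf = gauge_factor(delta_r=0.028, r=1.0, strain=8e-6)
```

For comparison, metallic strain gauges have GF ≈ 2, which is why a value of ~3500 from magnetization flipping is exceptional.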
With a large enough strain pulse, the static-state current abruptly increases, indicating a change in the static magnetic configuration. This behavior is completely different from what is observed in the purely FM or purely AFM states, where increasing the strain pulse magnitude only produces small, continuous changes, with a gauge factor three orders of magnitude smaller and no change in the static current (Extended Data Fig. 4). Therefore, we conclude that the strain pulse switching observed in Fig. 4a arises from changing the vertical domain structure of the mixed magnetic states. These results demonstrate that multiple individual magnetic domains, including the static magnetic state, can be controlled by applying extremely small strain pulses.

Stochastic domain switching

The demonstrated ability to switch the layer-dependent magnetization suggests that strain can tune the MTJ into a regime where the AFM and FM interlayer couplings are extremely close in energy.
Starting from a stable magnetic domain structure, we increase the static strain, VPDC, by 14 mV, as indicated by the red arrow in the top panel of Fig. 4b. Under this condition, the tunneling current proceeds to fluctuate between two values (Fig. 4b, bottom). By decreasing the piezo voltage back to its original value (blue arrow in Fig. 4b, top), the tunneling current returns to a stable value. The current fluctuations can thus be reliably turned on and off. To our knowledge, this is the first realization of p-bit-type operation using a vdW MTJ.
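The two-level fluctuations of such a near-degenerate domain behave like a random telegraph signal. As a toy illustration (not the paper's model), a two-state Markov chain captures the essential statistics: a dwell probability sets the switching rate, and a bias probability plays the role of the strain-tuned energy difference between the states.

```python
import numpy as np

def telegraph(n_steps: int, p_up: float, p_stay: float, seed: int = 0) -> np.ndarray:
    """Two-state Markov chain: at each step the domain keeps its current state
    with probability p_stay; otherwise it re-samples, landing in state 1 with
    probability p_up. p_up mimics the strain-tuned bias; p_stay sets the
    dwell time (i.e., the switching rate)."""
    rng = np.random.default_rng(seed)
    state = 0
    out = np.empty(n_steps, dtype=int)
    for i in range(n_steps):
        if rng.random() > p_stay:
            state = int(rng.random() < p_up)
        out[i] = state
    return out

# Unbiased p-bit with long dwell times: fluctuates between 0 and 1,
# spending comparable time in each state.
trace = telegraph(10_000, p_up=0.5, p_stay=0.99)
```

In this picture, increasing the bias voltage (which raises the switching rate in the experiment) corresponds to lowering p_stay, while strain shifts p_up away from 0.5.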
This functionality is enabled by the unique ability of strain to finely and continuously tune the energy barrier between the parallel and antiparallel spin configurations, enabling in-situ switching from stable, MRAM-type to stochastic, p-bit-type domains (Fig. 4c). By defining the lower current state as 0 and the higher current state as 1, we can convert the data to a binary sequence and analyze how the statistics of the domain switching respond to external control knobs, i.e., applied bias voltage and strain. We find that increasing the bias voltage applied to the tunnel junction leads to a large increase in the switching rate (Fig. 4d). Intriguingly, no switching is observed when a current of similar magnitude flows in the opposite direction (Extended Data Fig. 5). This bias-polarity dependence implies that heating is not the origin of the increased switching rate. Instead, the data suggest that the sample has an asymmetric vertical magnetic domain structure, which creates a difference in spin polarization, and thus in spin-transfer torque effects, when the current is passed in opposite directions12 (Extended Data Fig. 5). Whether such an asymmetric domain structure can give rise to exchange bias32, magnetic ratchet effects33, and other spintronic physics within a single crystal is a fascinating direction for future studies.

The relatively high Néel temperature (TN = 132 K) of CrSBr in comparison to other 2D A-type AFMs creates opportunities for potential device applications operating above liquid-nitrogen temperature. Figure 4e shows the response function (ρ) of the MTJ as a function of the static piezo voltage, with a starting value near the strain-induced phase transition, at 85 K.
The response function is calculated by converting the MTJ output to a binary sequence and averaging it over the entire time window. A response function value of 0 or 1 therefore indicates a stable magnetic domain, while a value of 0.5 indicates equal fluctuation between the two stable states. The ability to finely tune the response function should enable both random number generation at ρ = 0.5 and biased Bernoulli sequences at higher or lower values, which can be important for applications in Ising and probabilistic computing12. We further note that the applied bias voltage may also be used to tune the response function by increasing or decreasing the switching rate, potentially providing fine control near the edges of the sigmoidal curve while also enabling interaction between multiple p-bits.
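The binarization and response-function analysis just described can be sketched in a few lines. This is an illustrative pipeline, not the authors' code: the current trace is hypothetical, and the monobit check is only the simplest member of standard randomness test suites such as NIST SP 800-22.

```python
import numpy as np
from math import erfc, sqrt

def binarize(current: np.ndarray) -> np.ndarray:
    """Threshold a two-level current trace at its midpoint: low -> 0, high -> 1."""
    threshold = 0.5 * (current.min() + current.max())
    return (current > threshold).astype(int)

def response_function(bits: np.ndarray) -> float:
    """rho = time-average of the binary sequence: 0 or 1 indicates a stable
    domain, 0.5 indicates equal dwell in the two states."""
    return float(bits.mean())

def monobit_p_value(bits: np.ndarray) -> float:
    """Frequency (monobit) test: p > 0.01 is consistent with an unbiased
    random sequence. A full suite runs many more tests than this."""
    s = np.sum(2 * bits - 1)                 # map 0/1 to -1/+1 and sum
    return erfc(abs(s) / sqrt(len(bits)) / sqrt(2))

# Hypothetical two-level current trace (arbitrary units):
trace = np.array([1.0, 1.1, 5.0, 5.2, 1.0, 5.1])
bits = binarize(trace)                       # -> [0, 0, 1, 1, 0, 1]
rho = response_function(bits)                # -> 0.5
```

Sweeping the strain in this picture shifts rho along a sigmoid from 0 to 1, with the fluctuating, p-bit regime in the middle.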
In principle, the two independent control parameters (strain and bias voltage) could also offer independent tuning of the effective temperature and energy landscape of the p-bit, thereby allowing direct stochastic annealing of a p-bit system. Such a scheme could significantly reduce the circuit complexity required to realize a large-scale analog p-bit annealer, though additional study is needed to establish the full mapping between our two-dimensional voltage landscape and the statistical-mechanical state space of the p-bit dynamics.

To test the stochasticity of our device, we analyze the switching data taken when ρ ≈ 0.5, generating a binary sequence with near-equal numbers of 1s and 0s, as shown in Figs. 4f-g. Since the lock-in detection scheme reads the current much faster than the domain switching rate, we sample the raw data at a frequency slower than the calculated switching rate to prevent non-random runs of 1s and 0s (see discussion in the Supplementary Information). We tested the data using the NIST test suite (Fig. 4g) and by analyzing the rise and dwell times of the switching events, which show that the device spends equal amounts of time in the 0 and 1 states within the experimental error (Supplementary Information). These analyses, combined with the physical origin of the switching, strongly suggest that the metastable states switch stochastically, thereby acting as a random number generator.

In conclusion, we have demonstrated that strained single-crystal CrSBr offers a powerful platform for realizing zero-field programmable spintronic devices down to the atomically thin limit (Extended Data Fig. 6). Due to the versatile nature of vdW heterostructures, our results create a new path for various other programmable 2D quantum devices. For instance, replacing the graphite contacts with superconducting ones could enable field-free control of magnetic Josephson junctions34-37 and superconducting diode effects38-40.
Moreover, the ability to switch the layer-dependent magnetization and the vertical magnetic domain structure creates unprecedented opportunities to precisely vary the lengths of the FM and AFM tunnel barriers in-situ without significantly changing the overall thickness of the insulating CrSBr barrier layer. This capability could provide a new platform for exploring exotic phenomena that have been proposed in superconductor/ferromagnet junctions with inhomogeneous magnetization, such as spin-triplet correlations. More generally, our clamping and strain technique greatly expands the accessible strain range for cryogenic transport experiments on 2D devices, which could enable exciting discoveries on emergent quantum phenomena in vdW heterostructures, including moiré systems.

Methods

Device fabrication and strain application

To prepare the strain substrate, we first cut transparent, 20 µm thick polyimide into strips and epoxied them onto 2D flexure sample plates produced by Razorbill Instruments† using Stycast 2850 FT epoxy. The distance between the edges of the epoxy on either side of the gap was less than 200 µm to enable large strains.
Bulk CrSBr crystals were grown by the same method detailed previously28. The bulk CrSBr and graphite crystals were exfoliated onto PDMS substrates using standard methods, and thin (~ 10 nm) flakes were identified by optical contrast. The MTJs were then assembled through a dry-transfer technique with a stamp consisting of a polypropylene carbonate (PPC) film spin-coated onto a polydimethylsiloxane (PDMS) cylinder. The flakes were picked up in the following order before being deposited onto the polyimide substrate: top graphite, CrSBr, bottom graphite. The long axis of the CrSBr flake was aligned with the strain axis for consistency with the previous studies18. After depositing the MTJ heterostructure, the window clamping pattern and electrical contacts to the two graphite contacts were fabricated using standard electron-beam lithography techniques, with metal thicknesses of 7 nm Cr and 70 nm Au.
Then, the sample plate was screwed into the same symmetric three-piezo strain cell used previously18,23 for strain experiments on bulk crystals and for our previous experiments on strained CrSBr. To calibrate the strain during the experiment, we used the same Raman shift rate of the mode near ~ 346 cm−1 that we determined in the previous study2. We found a rather large built-in strain of ~ 0.9 %, which is consistent with the small saturating field in the out-of-plane direction.

† Certain commercial processes and software are identified in this article to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the processes and software identified are necessarily the best available for the purpose.

The observation that the strain-induced phase transition occurs at negative piezo voltages at lower temperature is consistent with a thermally induced built-in strain that increases with cooling.
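The Raman calibration amounts to dividing the measured shift of the ~346 cm−1 mode from its unstrained position by a previously determined shift-per-strain rate. A sketch of that bookkeeping, where every number (the unstrained frequency and especially the rate) is hypothetical and stands in for the paper's calibrated values:

```python
def strain_from_raman(omega_cm1: float, omega0_cm1: float,
                      rate_cm1_per_pct: float) -> float:
    """Strain (%) inferred from the shift of a Raman mode relative to its
    unstrained position omega0, given a calibrated shift rate in cm^-1
    per % strain (negative for a mode that softens under tension)."""
    return (omega_cm1 - omega0_cm1) / rate_cm1_per_pct

# Hypothetical calibration: with a rate of -4 cm^-1/%, a mode measured
# 3.6 cm^-1 below its unstrained position indicates ~0.9 % tensile strain.
built_in = strain_from_raman(342.4, 346.0, -4.0)
```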
Optical measurements: Optical measurements were performed in a backscattering geometry in a closed-cycle helium cryostat (OptiCool, Quantum Design) with a nominal sample temperature of 60 K. An objective lens focused 632.8 nm light from a He-Ne laser to a spot size of ~1 µm. For Raman measurements, a laser power of 200 µW was used; the collected signal was dispersed by a grating with a groove density of 1800 mm⁻¹ and detected by a liquid-nitrogen-cooled charge-coupled device (CCD) with an integration time of 210 s. BragGrate™ notch filters were used to filter out Rayleigh scattering down to ~10 cm⁻¹. A roughly linear background originating from weak polyimide photoluminescence was subtracted to increase the accuracy of the fitting results. For photoluminescence measurements, we used a laser power of 50 µW focused by the same objective.
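The linear background subtraction mentioned above can be sketched as a first-order polynomial fit to peak-free regions of the spectrum, which is then subtracted everywhere before fitting the Raman peaks. This is a generic illustration under assumed window positions, not the authors' exact analysis code.

```python
import numpy as np

def subtract_linear_background(shift_cm1, counts, bg_windows):
    """Fit a line to points inside bg_windows (peak-free regions) and
    subtract it from the whole spectrum."""
    mask = np.zeros_like(shift_cm1, dtype=bool)
    for lo, hi in bg_windows:
        mask |= (shift_cm1 >= lo) & (shift_cm1 <= hi)
    slope, intercept = np.polyfit(shift_cm1[mask], counts[mask], 1)
    return counts - (slope * shift_cm1 + intercept)

# Synthetic check: a purely linear "spectrum" should subtract to ~zero.
x = np.linspace(300, 400, 101)
residual = subtract_linear_background(x, 0.05 * x + 12.0,
                                      [(300, 320), (380, 400)])
print(np.allclose(residual, 0.0))  # True
```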
The collected light was dispersed by a grating with a groove density of 600 mm⁻¹ and detected by the same CCD with a 20 s integration time.

Transport measurements: Except for the data presented in Extended Data Fig. 6, the transport measurements were performed under the same conditions (OptiCool, Quantum Design) as the optical ones, enabling direct comparison between the observed phenomena. The data shown in Figs. 1-3 and 4b were taken using standard two-terminal DC measurements with a Keithley 2450, while the rest of the data in Fig. 4 were taken using AC detection with a DC offset voltage applied by a Zurich Instruments HF2 lock-in amplifier. The current was amplified by a current preamplifier (DL Instruments, Model 1211) with a sensitivity of 1 V/10⁻⁶ A. For the switching data used in Fig. 4e-f and the stochasticity analysis, a time constant of 5.082 ms with a fourth-order filter was used, which was found to give the best time resolution while maintaining a high signal-to-noise ratio. The 6L device in Extended Data Fig. 6 was measured in a PPMS DynaCool cryostat (Quantum Design). The data in Fig. S6a-c were taken using the same AC detection scheme, but with an SR860 lock-in amplifier. The switching data in Fig. S6d-e were obtained using a constant-current measurement scheme, achieved by placing a 100 MΩ resistor in series with the device.
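The constant-current scheme above works because the 100 MΩ series resistor dominates the total resistance: the current is set almost entirely by the source voltage and the series resistor, so the voltage across the device tracks its resistance as it switches. A minimal sketch of this series-circuit arithmetic, with illustrative (not measured) values:

```python
# Series-resistor constant-current bias: with r_series >> r_device, the
# current is nearly independent of the device state, and the device voltage
# is proportional to the device resistance. Values are illustrative only.

def bias_point(v_source, r_series=100e6, r_device=1e6):
    """Return (current in A, device voltage in V) for a simple series circuit."""
    current = v_source / (r_series + r_device)
    return current, current * r_device

i1, v1 = bias_point(10.0, r_device=1e6)  # device in a low-resistance state
i2, v2 = bias_point(10.0, r_device=2e6)  # device resistance doubles
# The current changes by only ~1 %, i.e. it is approximately constant,
# while the device voltage nearly doubles with the resistance.
```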
The resistance signal was then pre-amplified in the differential input mode of an SR560 preamplifier with a gain of 20.

References:
1. Baibich, M. N. et al. Giant Magnetoresistance of (001)Fe/(001)Cr Magnetic Superlattices. Physical Review Letters 61, 2472-2475 (1988).
2. Binasch, G., Grünberg, P., Saurenbach, F. & Zinn, W. Enhanced magnetoresistance in layered magnetic structures with antiferromagnetic interlayer exchange. Physical Review B 39, 4828-4830 (1989).
3. Žutić, I., Fabian, J. & Das Sarma, S. Spintronics: Fundamentals and applications. Reviews of Modern Physics 76, 323-410 (2004).
4. Dieny, B. et al. Giant magnetoresistive in soft ferromagnetic multilayers. Physical Review B 43, 1297-1300 (1991).
5. Julliere, M. Tunneling between ferromagnetic films. Physics Letters A 54, 225 (1975).
6. Moodera, J. S., Kinder, L. R., Wong, T. M. & Meservey, R. Large Magnetoresistance at Room Temperature in Ferromagnetic Thin Film Tunnel Junctions. Physical Review Letters 74, 3273-3276 (1995).
7. Miyazaki, T. & Tezuka, N. Giant magnetic tunneling effect in Fe/Al2O3/Fe junction. Journal of Magnetism and Magnetic Materials 139, L231 (1995).
8. Yuasa, S., Nagahama, T., Fukushima, A., Suzuki, Y. & Ando, K. Giant room-temperature magnetoresistance in single-crystal Fe/MgO/Fe magnetic tunnel junctions. Nature Materials 3, 868-871 (2004).
9. Parkin, S. S. P. et al. Giant tunnelling magnetoresistance at room temperature with MgO (100) tunnel barriers. Nature Materials 3, 862-867 (2004).
10. Ikeda, S. et al. Tunnel magnetoresistance of 604% at 300 K by suppression of Ta diffusion in CoFeB/MgO/CoFeB pseudo-spin-valves annealed at high temperature. Applied Physics Letters 93, 082508 (2008).
11. Borders, W. A. et al. Integer factorization using stochastic magnetic tunnel junctions. Nature 573, 390-393 (2019).
12. Safranski, C. et al. Demonstration of Nanosecond Operation in Stochastic Magnetic Tunnel Junctions. Nano Letters 21, 2040-2045 (2021).
13. Bapna, M. et al. Magnetostatic effects on switching in small magnetic tunnel junctions. Applied Physics Letters 108, 022406 (2016).
14. Mizrahi, A. et al. Neural-like computing with populations of superparamagnetic basis functions. Nature Communications 9 (2018).
15. Rippard, W., Heindl, R., Pufall, M., Russek, S. & Kos, A. Thermal relaxation rates of magnetic nanoparticles in the presence of magnetic fields and spin-transfer effects. Physical Review B 84 (2011).
16. Camsari, K. Y., Sutton, B. M. & Datta, S. p-bits for probabilistic spin logic. Applied Physics Reviews 6, 011305 (2019).
17. Bhatti, S. et al. Spintronics based random access memory: a review. Materials Today 20, 530-548 (2017).
18. Cenker, J. et al. Reversible strain-induced magnetic phase transition in a van der Waals magnet. Nature Nanotechnology 17, 256-261 (2022).
19. Song, T. et al. Giant tunneling magnetoresistance in spin-filter van der Waals heterostructures. Science 360, 1214-1218 (2018).
20. Wang, Z. et al. Very large tunneling magnetoresistance in layered magnetic semiconductor CrI3. Nature Communications 9 (2018).
21. Kim, H. H. et al. One Million Percent Tunnel Magnetoresistance in a Magnetic van der Waals Heterostructure. Nano Letters 18, 4885-4890 (2018).
22. Klein, D. R. et al. Probing magnetism in 2D van der Waals crystalline insulators via electron tunneling. Science 360, 1218-1222 (2018).
23. Hicks, C. W., Barber, M. E., Edkins, S. D., Brodsky, D. O. & Mackenzie, A. P. Piezoelectric-based apparatus for strain tuning. Review of Scientific Instruments 85, 065003 (2014).
24. Telford, E. J. et al. Layered Antiferromagnetism Induces Large Negative Magnetoresistance in the van der Waals Semiconductor CrSBr. Advanced Materials 32, 2003240 (2020).
25. Wang, Z. et al. Magnetization dependent tunneling conductance of ferromagnetic barriers. Nature Communications 12 (2021).
26. Cai, X. et al. Atomically Thin CrCl3: An In-Plane Layered Antiferromagnetic Insulator. Nano Letters 19, 3993-3998 (2019).
27. Wang, Z. et al. Determining the phase diagram of atomically thin layered antiferromagnet CrCl3. Nature Nanotechnology 14, 1116-1122 (2019).
28. Scheie, A. et al. Spin Waves and Magnetic Exchange Hamiltonian in CrSBr. Advanced Science 9, 2202467 (2022).
29. Lee, K. et al. Magnetic Order and Symmetry in the 2D Semiconductor CrSBr. (2020).
30. Wu, J. M. et al. Ultrahigh Sensitive Piezotronic Strain Sensors Based on a ZnSnO3 Nanowire/Microwire. ACS Nano 6, 4369-4374 (2012).
31. Yan, W. et al. Giant gauge factor of Van der Waals material based strain sensors. Nature Communications 12 (2021).
32. Meiklejohn, W. H. & Bean, C. P. New Magnetic Anisotropy. Physical Review 102, 1413-1414 (1956).
33. Lavrijsen, R. et al. Magnetic ratchet for three-dimensional spintronic memory and logic. Nature 493, 647-650 (2013).
34. Gingrich, E. C. et al. Controllable 0-π Josephson junctions containing a ferromagnetic spin valve. Nature Physics 12, 564-567 (2016).
35. Ai, L. et al. Van der Waals ferromagnetic Josephson junctions. Nature Communications 12 (2021).
36. Idzuchi, H. et al. Unconventional supercurrent phase in Ising superconductor Josephson junction with atomically thin magnetic insulator. Nature Communications 12 (2021).
37. Kang, K. et al. van der Waals π Josephson Junctions. Nano Letters (2022).
38. Narita, H. et al. Field-free superconducting diode effect in noncentrosymmetric superconductor/ferromagnet multilayers. Nature Nanotechnology (2022).
Acknowledgements: We thank Xuetao Ma and Yen-Cheng Kung for fabrication advice, G. C. Adam, W. A. Borders, and J. J. McClelland for proofreading the paper, and John Stroud and Heonjoon Park for their help during the initial stages of the project. The strain-controlled optical measurement is mainly supported by DE-SC0018171. The strain-controlled tunneling experiment is mainly supported by the Air Force Office of Scientific Research (AFOSR) Multidisciplinary University Research Initiative (MURI) program, grant no. FA9550-19-1-0390. CrSBr crystal synthesis is supported by the Center on Programmable Quantum Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under award DE-SC0019443. DGC is supported by the Columbia MRSEC on Precision-Assembled Quantum Materials (PAQM) (DMR-2011738). XX acknowledges support from the State of Washington funded Clean Energy Institute and from the Boeing Distinguished Professorship in Physics. JC acknowledges the Graduate Fellowship from the Clean Energy Institute funded by the State of Washington. ZL and JHC acknowledge the support of the David and Lucile Packard Foundation. This research was supported by an appointment to the Intelligence Community Postdoctoral Research Fellowship Program at the University of Washington, administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and the Office of the Director of National Intelligence.

Author contributions: XX and John C conceived the project. John C performed the optical and transport measurements with help from Jiaqi C and GD. DO supervised transport measurements and contributed to fabrication development. John C fabricated the samples with assistance from HY and ZL. John C, DO, TC, JHC, DX, and XX analyzed the data and interpreted the results. TC, MWD, and DX provided theoretical support. DGC grew the CrSBr crystals with supervision from XR and XYZ. John C and XX wrote the manuscript with input from all authors. All authors discussed the results.

Competing interests: John C and XX have applied for a patent based on this work.
Data availability: The datasets generated and/or analyzed during this study are available from the corresponding author upon reasonable request.

Figures:

Figure 1 | Straintronic van der Waals magnetic tunnel junction. a, Schematic of the magnetic state evolution of the CrSBr tunnel barrier under either magnetic fields along the easy b axis or in-plane uniaxial strain. The changing magnetic configuration creates different resistance states when a bias is applied between the graphite contacts (grey). The red and blue arrows denote the spin direction within each layer. b, Schematic of the straintronic MTJ consisting of graphite contacts sandwiching a CrSBr tunnel barrier (blue). The whole device is fixed by gold clamps to a flexible polyimide substrate (purple), which is then strained. c, Magnetic field dependence of an MTJ using an ≈ 11 nm CrSBr tunnel barrier (optical image inset, scale bar 3 µm) at a temperature of 60 K. The device is slightly strained but remains in the AFM state at zero magnetic field. The magnetic field is applied along the hard c axis, leading to spin canting (inset arrows).

Figure 2 | Strain switchable magnetic tunnel junctions. a, Magnetoresistance sweeps at select piezo voltages with a fixed bias voltage across the MTJ of VB = 0.5 V. The sweeps are offset for clarity. b, Full strain-dependent tunneling magnetoresistance with the magnetic field swept from positive to negative. c, Strain-dependent photoluminescence intensity plot. The beam spot was kept fixed on the junction region while the strain was continuously swept. d-e, Bias-dependent tunneling current with magnetic fields of 0 T (blue) and 3 T (green) applied in the low strain (d) and high strain (e) states. The magnetic state for each curve is depicted in the inset.
All measurements were performed at a temperature of 60 K.

Figure 3 | Temperature-dependent zero-field tunneling resistance switching. a, Tunneling resistance as a function of piezo voltage. A large TMR change of ≈ 2700 % is observed between the low and high strain states at 60 K. The change in magnetic state from AFM to FM interlayer coupling is depicted by the inset spin diagram. b, Piezo-voltage-dependent tunneling resistance at select temperatures from 30 K to 149 K. c, Temperature dependence of the tunneling magnetoresistance ratio, defined as TMR (%) = (R_AFM − R_FM)/R_FM × 100.
d, Magnetic-field-dependent tunneling resistance at 155 K in the low strain (blue) and high strain (red) states.

Figure 4 | Strain control of multiple stable and stochastic layer-dependent magnetic domains. a, Tunneling current over time as strain pulses of increasing amplitude are applied. The inset shows the measurement scheme: a small pulse of amplitude VPAC is applied on top of a static piezo voltage VPDC. The system is initialized by slowly increasing VPDC until the magnetic phase transition starts to occur. As the pulse amplitude increases, the current switching stabilizes into discrete states (denoted by the purple arrows). Additionally, the resting current, i.e. the ground state, can be changed by a sufficiently large pulse. b, Tunneling current over time as the static piezo voltage VPDC is increased (red arrow) and then decreased (blue arrow) by 0.014 V. No strain pulse is applied. Bottom: finer time resolution data of the domain fluctuations observed in the top panel. c, Schematic of strain tuning between magnetic domains. A sufficiently high pulse VPAC will flip between AFM and FM domains (left). Fine adjustment of the static strain lowers the energy difference between AFM and FM domains, creating a metastable state with stochastic domain switching (right). d, Bias dependence of the switching rate in the metastable state. The piezo voltage is kept constant during the measurement. Data in panels a-d are taken at 60 K. e, Response function of a sensitive magnetic domain as a function of static piezo voltage at a temperature of 85 K. A value of either 0 or 1 indicates a stable domain. f, Tunneling current (top) and converted binary sequence (bottom) over time when the response function is near 0.5, indicating equal amounts of fluctuations between the parallel and antiparallel configurations. g, P-values returned by the NIST random number test suite applied to the binary sequence from f. The black dashed line indicates a p-value of 0.01, the threshold for passing the specific test. The sampling time was 0.1760 seconds (see Supplementary Information).
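The binary-sequence analysis in panels f and g can be illustrated with a minimal sketch. Assumptions: a simple fixed threshold digitizes the current trace (the paper's exact conversion is in its Supplementary Information), and only the NIST SP 800-22 frequency (monobit) test is shown as a representative member of the full test suite.

```python
import math

def digitize(currents, threshold):
    """Convert a tunneling-current trace into a binary sequence:
    1 for the high-current state, 0 for the low-current state.
    The threshold value here is a hypothetical calibration choice."""
    return [1 if i > threshold else 0 for i in currents]

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test.
    Maps bits to +/-1, sums them, and returns the p-value;
    a p-value below 0.01 means the sequence fails this test."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

# A balanced sequence passes (p close to 1), while a constant
# sequence fails badly (p far below the 0.01 threshold).
```

A perfectly alternating sequence such as `[0, 1] * 500` yields p = 1.0, whereas `[1] * 100` gives a vanishingly small p-value, mirroring the pass/fail line drawn in panel g.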
Extended Data for Strain-programmable van der Waals magnetic tunnel junctions

Authors: John Cenker1, Dmitry Ovchinnikov1, Harvey Yang1, Daniel G. Chica2, Catherine Zhu1, Jiaqi Cai1, Geoffrey Diederich1,3, Zhaoyu Liu1, Xiaoyang Zhu2, Xavier Roy2, Ting Cao4, Matthew W. Daniels5, Jiun-Haw Chu1, Di Xiao4,1, Xiaodong Xu1,4,*

1 Department of Physics, University of Washington, Seattle, Washington 98195, USA
2 Department of Chemistry, Columbia University, New York, NY 10027, USA
3 Intelligence Community Postdoctoral Research Fellowship Program, University of Washington, Seattle, WA, USA
4 Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
5 Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA

*Correspondence to: xuxd@uw.edu

Extended Data Fig. 1 | Calibration of strain through Raman spectroscopy. a, Raman scattering from the P3 phonon taken on the tunnel junction region at a piezo voltage of 0 V. A linear background originating from the polyimide photoluminescence is subtracted. The narrow linewidth indicates a homogeneous strain. b, Raman intensity plot as a function of piezo voltage. The beam spot is kept on the junction as the piezo voltage is continually increased. c, Measured strain as a function of the applied voltage to the strain cell.
The strain is calculated by fitting the data from b with Lorentzians and comparing the peak position to the unstrained value of 346 cm-1, using a strain shift rate of 4.2 cm-1/% as reported in previous studies. We found a built-in strain of ~0.9% at the lowest piezo voltage used at this temperature.

Extended Data Fig. 2 | Magnetoresistance sweeps at select piezo voltages. a-d, Magnetoresistance sweeps as the field is swept down (blue) and up (black) at select piezo voltages through the strain-induced layered magnetization flipping. At low strain (a), large negative magnetoresistance is observed, consistent with AFM order, while small positive magnetoresistance is observed in the high-strain-induced FM state (d). In between, complex and hysteretic magnetic domain behavior is observed.
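The strain extraction in Extended Data Fig. 1c can be sketched as follows. This is a minimal reconstruction, not the authors' code: the Lorentzian parameterization, the fit initialization, and the sign convention that tensile strain redshifts the P3 mode are all assumptions; only the 346 cm-1 unstrained position and the 4.2 cm-1/% shift rate come from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

UNSTRAINED_SHIFT = 346.0  # cm^-1, unstrained P3 peak position (from the text)
SHIFT_RATE = 4.2          # cm^-1 per % strain (from previous studies)

def lorentzian(x, x0, gamma, amp, offset):
    """Lorentzian line shape on a constant background."""
    return amp * (gamma / 2) ** 2 / ((x - x0) ** 2 + (gamma / 2) ** 2) + offset

def strain_from_spectrum(raman_shift, intensity):
    """Fit the P3 peak and convert its position to percent strain,
    assuming the mode redshifts under tensile strain."""
    p0 = [raman_shift[np.argmax(intensity)], 2.0,
          intensity.max() - intensity.min(), intensity.min()]
    popt, _ = curve_fit(lorentzian, raman_shift, intensity, p0=p0)
    peak_position = popt[0]
    return (UNSTRAINED_SHIFT - peak_position) / SHIFT_RATE
```

Applied to each spectrum in the piezo-voltage sweep of panel b, this yields the strain-versus-voltage curve of panel c.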
Extended Data Fig. 3 | Magneto-photoluminescence mapping of magnetic domains. a-b, Comparison of tunneling magnetoresistance (a) and integrated intensity from magneto-photoluminescence (PL) (b) measurements at the same piezo voltage. The correlation of the curves highlights the connection of the interlayer magnetic coupling to both electronic tunneling and exciton luminescence. c, Optical image of the device with different spots labeled by different colors. d-g, Magneto-PL sweeps at each of the spots labeled in (c). The similarity between spots separated by several microns indicates the presence of vertical, rather than lateral, magnetic domains.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=') PL (a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='u.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=') 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='3 Energy (eV) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='4 Energy (eV) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='4 600 600 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='3 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='3 1 0 1 1 0 1 μH (T) μ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='H (T)Energy (eV) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='4 f Energy (eV) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='4 6 TMR 5 910 1000 4 PL PL (a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='u.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=') 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='3 (a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='u.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=') 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='3 R 3 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='4 Energy (eV) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='4 8 Energy (eV) b (a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' u.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=') PL 7 600 600 Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' intensity 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='3 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='3 9 1 0 1 1 0 μ,H (T) μ。' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='H (T) 5 Extended Data Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' 4 | Strain pulse data in the purely FM and AFM states.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' a, Strain pulse amplitude dependence in the purely FM state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' As VPAC is increased from 0 to 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='5 V, a continuous change in the current is observed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' The calculated gauge factor is ~ 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' b, Change in tunneling current over time as a strain pulse of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='5 V is applied in the AFM state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Due to the very large resistance, the effect of pulses with smaller amplitude cannot be resolved.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' A gauge factor of ~ 30 is calculated, but with a large uncertainty due to the high resistance in the AFM state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' No changes to the static current are observed in either FM or AFM states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' .' 
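The gauge factors quoted above are not defined in this excerpt; a sketch under the conventional piezoresistive definition, GF = (ΔR/R)/Δε, follows. The definition and the example numbers in the usage note are assumptions, not values from the measurement.

```python
def gauge_factor(delta_r, r, delta_strain):
    """Conventional piezoresistive gauge factor: fractional resistance
    change per unit strain (strain given as a dimensionless fraction)."""
    return (delta_r / r) / delta_strain
```

For example, under this definition a 0.05% resistance change produced by a 0.01% strain pulse corresponds to GF = 5.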
Extended Data Fig. 5 | Bias-polarity-dependent stochastic switching indicates asymmetric vertical magnetic domain structure. a-b, Tunneling current over time of a metastable domain with a positive (a) and negative (b) bias of 620.5 mV applied to the MTJ. Despite a similar magnitude of current, no switching is observed under negative bias, ruling out heating effects. Instead, the data are consistent with an asymmetric vertical domain structure, as illustrated in (c). A plausible scenario is that when a positive voltage is applied, the FM layers polarize the tunneling electrons. These spin-polarized electrons apply a spin-transfer-torque-like effect to the AFM layers, enhancing the stochastic switching. When a negative bias is applied, on the other hand, the electrons are not highly polarized and do not exert a spin-transfer torque on the FM layers.
Extended Data Fig. 6 | Strain switching in a six-layer MTJ. a, Magnetoresistance sweeps of an MTJ with a six-layer CrSBr tunnel barrier as the piezo voltage Vp is increased from 32.5 V to 75 V. The domain behavior at piezo voltages between the low-strain AFM (32.5 V) and high-strain FM (75 V) states is much simpler than in the ~16-layer device presented in the main text, providing additional evidence that vertical, layer-dependent domains are the origin of the complex hysteretic domain behavior during the magnetic phase transition. The magnetic field is applied along the a axis at a temperature of 20 K. b-c, Magnetoresistance sweeps in the low-strain AFM (b) and high-strain FM (c) states, showing the characteristic switching from negative to positive MR. An optical image of the device is shown inset in b (scale bar 5 µm). d-e, Resistance over time at select piezo voltages during the magnetic phase transition. Stochastic domain switching (d), which can be stabilized by slightly increasing the strain (e), is observed. These results highlight the potential for extending strain-programmable vdW MTJs to the 2D limit.
Supplementary information for Strain-programmable van der Waals magnetic tunnel junctions

Supplementary Text: Additional stochasticity analysis of switching data taken near ρ = 0.5

Since the tunneling current is sampled much faster than the switching rate (~0.14 s), switching data collected over 200 seconds were downsampled and tested using 15 tests from the NIST test suite1. Maurer's Universal Test was excluded since the binary sequence was not long enough. The full sampling-time dependence is shown below, using a standard threshold p-value of 0.01.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' The grey line indicates the sequence passed all of the 15 considered tests.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' The red line indicates the average domain switching time obtained by dividing the total number of switches by the total time window.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' In addition to the NIST test suite, we analyzed the dwell time, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' the time between switches, of the 0 and 1 states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' The extracted dwell times are plotted as a histogram for the 0 and 1 states, following an exponential envelope as expected for a Poisson process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' We then plot the logarithm of the histogram bin counts (N) versus the dwell time: 80 80 40 40 0 0 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='6 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='8 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='8 Time (sec) Time (sec)Zero State Dwell Time One State Dwell Time 200 200 160 160 120 120 untsTests 5 0 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content='2 Sampling time (sec)15 10 Passed 3 From the linear fits, we find that the characteristic lifetime, τ, of the 0 and 1 states are τ0 = 159 ± 9 ms and τ1 = 151 ± 9 ms, respectively, where the uncertainty is determined by the standard deviation of the linear fit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE2T4oBgHgl3EQfPgY3/content/2301.03759v1.pdf'} +page_content=' Based on this analysis and the NIST test suite results, we conclude that the strained MTJ can generate binary sequences with a high degree of randomness.' 
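The dwell-time analysis described above can be reproduced in outline with synthetic data. The sketch below is our own illustration (not the authors' code): it simulates exponentially distributed dwell times with a lifetime comparable to the reported ~150 ms, histograms them, and recovers the lifetime from a linear fit to the log of the bin counts, exactly as in the analysis of the figure.

```python
import numpy as np

# Illustrative sketch: dwell times of a Poisson (random telegraph) process are
# exponentially distributed, so log(counts) vs. dwell time is linear with
# slope -1/tau. We recover tau from that slope.
rng = np.random.default_rng(0)
tau = 0.155                                  # seconds; comparable to the reported ~150 ms
dwells = rng.exponential(tau, size=20000)    # synthetic dwell times of one state

counts, edges = np.histogram(dwells, bins=30, range=(0.0, 1.0))
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0                            # avoid log(0) in empty bins
slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1)

tau_fit = -1.0 / slope
print(tau_fit)  # close to 0.155
```

An unweighted fit slightly overweights the noisy, sparsely populated tail bins; with enough samples per bin, as here, the recovered lifetime is within a few percent of the truth.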
[Figure: log(N) versus dwell time for the zero state and one state, with linear fits.]

References:
1. Ang, S., Churchill, S.
, NIST Test Suite, GitHub Repository, https://github.com/stevenang/randomness_testsuite (2017)
diff --git a/8dAzT4oBgHgl3EQfgfzo/vector_store/index.pkl b/8dAzT4oBgHgl3EQfgfzo/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..a28f0cfd64ae6248262c15ed7fdc4bb01c665f8b --- /dev/null +++ b/8dAzT4oBgHgl3EQfgfzo/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:875e25fa1e785b5707e936425d9f8232958a7f9fa738cdbb9d89dbc711282c8d +size 106544 diff --git a/8tFRT4oBgHgl3EQfpzcC/content/2301.13614v1.pdf b/8tFRT4oBgHgl3EQfpzcC/content/2301.13614v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d2257a879ea277f264e6fc8928e4f04974395a3e --- /dev/null +++ b/8tFRT4oBgHgl3EQfpzcC/content/2301.13614v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5041e4cceb98e7c299f439c6214fd71d2cb6bb53c8801d392958f71f8880f6d7 +size 2268832 diff --git a/8tFRT4oBgHgl3EQfpzcC/vector_store/index.faiss b/8tFRT4oBgHgl3EQfpzcC/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..3a3d6dcc40c274fc308970b224cf68c61ca4d132 --- /dev/null +++ b/8tFRT4oBgHgl3EQfpzcC/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d0238f3291ec63c05f18305eabf803d35d46f36effa5e2a4da55dce85b7622c9 +size 2490413 diff --git a/9dA0T4oBgHgl3EQfO_9E/content/tmp_files/2301.02168v1.pdf.txt b/9dA0T4oBgHgl3EQfO_9E/content/tmp_files/2301.02168v1.pdf.txt new file mode 100644 index
0000000000000000000000000000000000000000..3383390acb3d4c12faacc9c5e519d889799d7a33 --- /dev/null +++ b/9dA0T4oBgHgl3EQfO_9E/content/tmp_files/2301.02168v1.pdf.txt @@ -0,0 +1,3731 @@

On the Approximation Accuracy of Gaussian Variational Inference

Anya Katsevich (akatsevi@mit.edu) and Philippe Rigollet (rigollet@math.mit.edu)

January 6, 2023

Abstract

The main quantities of interest in Bayesian inference are arguably the first two moments of the posterior distribution. In the past decades, variational inference (VI) has emerged as a tractable approach to approximating these summary statistics, and as a viable alternative to the more established paradigm of Markov Chain Monte Carlo. However, little is known about the approximation accuracy of VI. In this work, we bound the mean and covariance approximation errors of Gaussian VI in terms of dimension and sample size. Our results indicate that Gaussian VI significantly outperforms the classical Gaussian approximation obtained from the ubiquitous Laplace method. Our error analysis relies on a Hermite series expansion of the log posterior whose first terms are precisely cancelled out by the first-order optimality conditions associated with the Gaussian VI optimization problem.

1 Introduction

A central challenge in Bayesian inference is to sample from, or compute summary statistics of, a posterior distribution π on R^d. The classical approach to sampling is Markov Chain Monte Carlo (MCMC), in which a Markov chain designed to converge to π is simulated for a sufficiently long time. However, MCMC can be expensive, and it is notoriously difficult to identify clear-cut stopping criteria for the algorithm [CC96]. Besides, if one is only interested in summary statistics of π such as the mean and covariance, then generating samples from π may not be the most efficient way to achieve this goal. An alternative, often computationally cheaper, approach is variational inference (VI) [BKM17].
The idea of VI is to find, among all measures in a certain parameterized family P, the closest measure to π. While various measures of proximity have been proposed since the introduction of VI [DD21, DDP21], we employ here the KL divergence, which is by far the most common choice. Typically, the statistics of interest for measures in the family P, chiefly the first two moments, are either readily available or else easily computable. In this work, we consider the family of normal distributions, which are directly parameterized by their mean and covariance. We define

π̂ = N(m̂, Ŝ) ∈ argmin_{p ∈ P_Gauss} KL(p ∥ π),   (1.1)

and take m̂, Ŝ as our estimates of the true mean m_π and covariance S_π of π. Here, P_Gauss denotes the family of non-degenerate Gaussian distributions on R^d.

A key difference between MCMC and VI is that unbiased MCMC algorithms yield arbitrarily accurate samples from π if they are run for long enough. On the other hand, the output of a perfect VI algorithm is π̂, which is itself only an approximation to π. Therefore, a fundamental question in VI is to understand the quality of the approximation π̂ ≈ π, particularly in terms of the statistics of interest. In this work, we bound the mean and covariance estimation errors ∥m̂ − m_π∥ and ∥Ŝ − S_π∥ for the Gaussian VI estimate (1.1).

Of course, we cannot expect an arbitrary, potentially multimodal π to be well approximated by a Gaussian distribution. In the setting of Bayesian inference, however, the Bernstein–von Mises theorem guarantees that under certain regularity conditions, a posterior distribution converges to a Gaussian density in the limit of large sample size [VdV00, Chapter 10].

arXiv:2301.02168v1 [math.ST] 5 Jan 2023

To understand why this is the case, consider a generic posterior π = π_n with density of the form

π_n(θ | x_{1:n}) ∝ ν(θ) ∏_{i=1}^{n} p_θ(x_i)   (1.2)

Here, ν is the prior, p_θ is the model density, and x_{1:n} = x_1, . . . , x_n are i.i.d. observations.
Provided ν and p_θ are everywhere positive, we can write π_n as

π_n(θ) ∝ e^{−n v_n(θ)},   v_n(θ) := −(1/n) ∑_{i=1}^{n} log p_θ(x_i) − (1/n) log ν(θ).

If n is large and v_n has a strict global minimum at θ = m∗, then π_n will place most of its mass in a neighborhood of m∗. In other words, π_n is effectively unimodal, and hence a Gaussian approximation is reasonable in this case. This reasoning drives a second, so-called Laplace approximation to π_n, which is a Gaussian centered at m∗. Hence, the mode m∗ can also serve as an approximation to the true mean m_{π_n}. However, as we discuss below, Gaussian VI yields a more accurate estimate of m_{π_n}.

Main contributions. Our main result quantifies the mean and covariance estimation errors of Gaussian VI for a target measure π_n ∝ e^{−n v_n}, in terms of sample size n and dimension d. In line with the above reasoning, the key assumption is that v_n has a unique global minimizer.

It is useful at this point to think of v_n as being a quantity of order 1, and for the purpose of readability, we write simply v_n = v in the rest of this introduction. It is easy to see that π_n ∝ e^{−nv} has variance of order 1/n. To account for this vanishing variance, we rescale the approximation errors appropriately in the statement of the following theorem.

Theorem. Let π_n ∝ exp(−nv) have mean and covariance m_{π_n}, S_{π_n}, respectively. Assume that d^3 ≤ n and that v ∈ C^4(R^d) has a unique strict minimum at m∗. If ∥∇^3 v∥ and ∥∇^4 v∥ grow at most polynomially away from m∗, and if v grows at least logarithmically away from m∗, then the mean and covariance m̂_n, Ŝ_n of the variational Gaussian approximation (1.1) to π satisfy

√n ∥m̂_n − m_{π_n}∥ ≲ (d^3/n)^{3/2},   n ∥Ŝ_n − S_{π_n}∥ ≲ d^3/n.   (1.3)

Here, ≲ means the inequalities hold up to an absolute (d-, n-independent) constant, as well as a factor depending on second and third order derivatives of v in a neighborhood of the mode m∗. This v-dependent factor is made explicit in Section 2.
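The mode-vs-mean gap that Gaussian VI improves upon can be seen numerically. The following 1D sanity check is our own illustration (the potential v below is hypothetical, not from the paper): it computes the posterior mean of π_n ∝ e^{−nv} by quadrature for a skewed v with unique minimizer m∗ = 0, and checks that the mode approximates the mean only to order 1/n, so doubling n roughly halves the error.

```python
import numpy as np

def v(x):
    # hypothetical skewed potential with unique global minimum at m* = 0
    return x**2 / 2 + x**4 / 4 + 0.1 * x**3

def posterior_mean(n):
    # posterior mean of pi_n ∝ exp(-n v) via a fine Riemann sum
    x = np.linspace(-3.0, 3.0, 400001)
    logp = -n * v(x)
    p = np.exp(logp - logp.max())     # normalize in log space for stability
    return (x * p).sum() / p.sum()

# |m* - m_{pi_n}| = |0 - posterior mean| should scale like 1/n:
errs = {n: abs(posterior_mean(n)) for n in (100, 200, 400)}
print(errs[100] / errs[200], errs[200] / errs[400])  # both ~ 2
```

For this v, a Taylor computation gives m_{π_n} ≈ −v′′′(0)/(2n v′′(0)²) = −0.3/n, consistent with the observed halving.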
The theorem shows that both the mean and covariance VI estimates, and especially the mean estimate m̂_n, are remarkably accurate approximations to the true mean and covariance. As such, it is a compelling endorsement of Gaussian VI for estimating the posterior mean and covariance in the finite-sample regime. Although the condition n ≥ d^3 is restrictive when d is very large, we believe that it is unavoidable without further assumptions, and we note that it also appears in existing bounds for the Laplace method [Spo22].

As mentioned above, the Laplace method is a competing Gaussian approximation to π_n that is widespread in practice for its computational simplicity. We use it as a benchmark to put the above error bounds into context. The Laplace approximation to π_n ∝ e^{−nv} is given by

π_n ≈ N(m∗, (n ∇^2 v(m∗))^{−1}),

where m∗ is the global minimizer of v. This approximation simply replaces v by its second-order Taylor expansion around m∗. The recent works [Spo22] and [KGB22] derive error bounds for the Laplace approximation. Spokoiny [Spo22] shows that √n ∥m∗ − m_{π_n}∥ ≲ (d^3/n)^{1/2} assuming v is strongly convex, and [KGB22] similarly shows that √n ∥m∗ − m_{π_n}∥ ≲ 1/√n with implicit dependence on d, under weaker assumptions. For the covariance approximation, an explicit error bound is stated only in [KGB22]; the authors show that n ∥(n ∇^2 v(m∗))^{−1} − S_{π_n}∥ ≲ 1/√n. Meanwhile, Spokoiny states lemmas in the appendix from which one can derive a d-dependent covariance error bound.

In a companion paper [Kat23], we extend the techniques developed in the present work to obtain the following tighter n-dependence of the Laplace covariance error:

n ∥(n ∇^2 v(m∗))^{−1} − S_{π_n}∥ ≲ 1/n.   (1.4)

This n-dependence can also be obtained using the approach in [Spo22].

Let us summarize the n-dependence of these bounds, incorporating the 1/√n and 1/n scaling of the mean and covariance errors.
The Gaussian VI mean approximation error is n^{−1/2} × n^{−3/2}, which is a factor of n^{−1} more accurate than the Laplace mean error of n^{−1/2} × n^{−1/2}. The covariance approximation error is the same for both methods (using the tighter covariance bound (1.4)): n^{−1} × n^{−1}. VI's improved mean approximation accuracy is confirmed in our simulations of a simple Bayesian logistic regression example in d = 2; see Figure 1, and Section 2.3 for more details.

Figure 1: Gaussian VI yields a more accurate mean estimate than does Laplace, while the two covariance estimates are of the same order. Here, π_n is the likelihood of logistic regression given n observations in dimension d = 2. For the left-hand plot, the slopes of the best-fit lines are −1.04 for the Laplace approximation and −2.02 for Gaussian VI. For the covariance: the slopes of the best-fit lines are −2.09 for Laplace and −2.12 for VI.

We note that the Laplace approximation error bounds in the companion work [Kat23] are also tighter in their dimension dependence.

First-order optimality conditions and Hermite series expansions. The improvement of Gaussian VI over the Laplace method for estimating the posterior mean rests on a remarkable interaction between first-order optimality conditions and a Hermite series expansion of the potential v.

Hereafter, we replace θ by x and let V = nv, π ∝ e^{−V}. The focal point of this work is the first-order optimality equations for the minimization (1.1):

∇_{m,S} KL(N(m, S) ∥ π) |_{(m,S) = (m̂, Ŝ)} = 0.

This is also equivalent to setting the Bures–Wasserstein gradient of KL(p ∥ π) to zero at p = N(m̂, Ŝ), as in [LCB+22]. Explicitly, we obtain that (m, S) = (m̂, Ŝ) is a solution to

E[∇V(m + S^{1/2} Z)] = 0,   E[∇^2 V(m + S^{1/2} Z)] = S^{−1},   (EV)

where Z ∼ N(0, I_d) and S^{1/2} is the positive definite symmetric square root of S; see [LCB+22] for this calculation.
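The fixed-point equations (EV) can be solved numerically. The following minimal 1D sketch is our own illustration with a hypothetical potential (not the paper's algorithm): it evaluates the Gaussian expectations with Gauss–Hermite quadrature and iterates a Newton-style update for m together with the covariance equation S = 1/E[V″(m + √S Z)].

```python
import numpy as np

# Hypothetical 1D example: V = n*v with v(x) = x^2/2 + x^4/4 + 0.1*x^3.
# We solve E[V'(m + sqrt(S) Z)] = 0 and E[V''(m + sqrt(S) Z)] = 1/S, Z ~ N(0,1).
n = 200

def dV(x):   # V'(x)
    return n * (x + x**3 + 0.3 * x**2)

def d2V(x):  # V''(x)
    return n * (1 + 3 * x**2 + 0.6 * x)

# Gauss-Hermite rule: E[f(Z)] = (1/sqrt(pi)) * sum_k w_k f(sqrt(2) t_k)
t, w = np.polynomial.hermite.hermgauss(40)
z = np.sqrt(2.0) * t
w = w / np.sqrt(np.pi)

def gauss_mean(f, m, S):
    return np.sum(w * f(m + np.sqrt(S) * z))

m, S = 0.0, 1.0 / n  # initialize at the Laplace approximation (m*, 1/V''(m*))
for _ in range(200):
    m = m - gauss_mean(dV, m, S) / gauss_mean(d2V, m, S)  # Newton step in m
    S = 1.0 / gauss_mean(d2V, m, S)                       # covariance equation

print(gauss_mean(dV, m, S))       # ~ 0
print(S * gauss_mean(d2V, m, S))  # ~ 1
```

At the fixed point, m sits slightly to the negative side of the mode m∗ = 0, reflecting the skew of the density that the Laplace approximation ignores.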
In some sense, the fact that N(m̂, Ŝ) minimizes the KL divergence to π does not explain why m̂ is such an accurate estimate of m_π. Rather, the true reason has to do with properties of solutions to the fixed point equations (EV).

To see why, consider the function V̄(x) = V(m̂ + Ŝ^{1/2} x). If π ∝ e^{−V} is close to the density of N(m̂, Ŝ), then π̄ ∝ e^{−V̄} should be close to the density of N(0, I_d). In other words, we should have that V̄(x) ≈ const. + ∥x∥²/2. This is ensured by the first-order optimality equations (EV). Indeed, note that (EV) can be written in terms of V̄ as

E[∇V̄(Z)] = 0,   E[∇^2 V̄(Z)] = I_d.   (1.5)

[Figure 1 panels: mean approximation error ∥m − m_π∥ and covariance approximation error ∥S − S_π∥ versus n on log-log axes, comparing m = m∗, S = (n∇²v(m∗))^{−1} (Laplace) against m = m̂_n, S = Ŝ_n (Gaussian VI).]

As we explain in Section 3.4, the equations (1.5) set the first and second order coefficients in the Hermite series expansion of V̄ to 0 and I_d, respectively. As a result, V̄(x) − ∥x∥²/2 = const. + r_3(x), where r_3 is a Hermite series containing only third and higher order Hermite polynomials. The accuracy of the Gaussian VI mean and covariance estimates stems from the fact that the Hermite remainder r_3 is of order r_3 ∼ 1/√n, and the fact that r_3 is orthogonal to linear and quadratic functions with respect to the Gaussian measure. See Section 3.4 for a high-level summary of this Hermite series based error analysis.

Related Work. The literature on VI can be roughly divided into statistical and algorithmic works. Works on the statistical side have focused on the contraction of variational posteriors around a ground truth parameter in the large-n (sample size) regime. (We use “variational posterior” as an abbreviation for variational approximation to the posterior.)
For example, [WB19] prove an analogue of the Bernstein–von Mises theorem for the variational posterior, [ZG20] study the contraction rate of the variational posterior around the ground truth in a nonparametric setting, and [AR20] study the contraction rate of variational approximations to tempered posteriors in high dimensions.

A key difference between these works and ours is that here, we determine how well the statistics of the variational posterior match those of the posterior itself, rather than those of a limiting (n → ∞) distribution. We are only aware of one other work studying the problem of quantifying posterior approximation accuracy. In [HY19], the authors consider a Bayesian model with “local” latent variables (one per data point) and global latent variables, and they study the mean-field variational approximation, given by the product measure closest to the true posterior in terms of KL divergence. They show that the mean m̂ of their approximation satisfies √n ∥m̂ − m_π∥ ≲ 1/n^{1/4}.

Since the algorithmic side of VI is not our focus here, we simply refer the reader to the work [LCB+22] and references therein. This work complements our analysis in that it provides rigorous convergence guarantees for an algorithm that solves the optimization problem (1.1).

Organization of the paper. The rest of the paper is organized as follows. In Section 2, we first redefine (m̂, Ŝ) as a certain “canonical” solution to the first-order optimality conditions (EV). We then state our assumptions and main result on the Gaussian VI mean and covariance approximation errors, and present a numerical result confirming the n-scaling of our bound. In Section 3, we give an overview of the proof, and in Section 4 we flesh out the details. Section 5 outlines the proof of the existence and uniqueness of the aforementioned “canonical” solution (m̂, Ŝ) to (EV).
In the Appendix, we derive a multivariate Hermite series remainder formula and then prove a number of supplementary results omitted from the main text.

Notation. For two k-tensors T, Q ∈ (R^d)^{⊗k}, we define

⟨T, Q⟩ = ∑_{i_1,...,i_k=1}^{d} T_{i_1...i_k} Q_{i_1...i_k},

and let ∥T∥_F = ⟨T, T⟩^{1/2} be the Frobenius norm of T. We will more often make use of the operator norm of T, denoted simply by ∥·∥:

∥T∥ = sup_{∥u_1∥≤1, ..., ∥u_k∥≤1} ⟨u_1 ⊗ · · · ⊗ u_k, T⟩,   (1.6)

where the supremum is over vectors u_1, . . . , u_k ∈ R^d. For positive scalars a, b, we write a ≲ b to denote that a ≤ Cb for an absolute constant C (the only exception to this notation is (1.3) above, in which ≲ also incorporated a v-dependent factor). We let

m_π = E_π[X],   S_π = Cov_π(X) = E_π[(X − m_π)(X − m_π)^T].

Finally, for a function V with a unique global minimizer m∗, we let H_V denote ∇²V(m∗).

2 Statement of Main Result

Throughout the rest of the paper, we write π ∝ e^{−nv}. Note that v may depend on n in a mild fashion, as is often the case for Bayesian posteriors. We also define V = nv.

In light of the centrality of the fixed point equations (EV), we begin the section by redefining (m̂, Ŝ) as solutions to (EV) rather than as minimizers of the KL divergence objective (1.1). These definitions diverge only in the case that V is not strongly convex. Indeed, if V is strongly convex then KL(· ∥ π) is strongly geodesically convex on the submanifold of normal distributions; see, e.g., [LCB+22]. Therefore, in this case, there is a unique minimizer π̂ of the KL divergence, corresponding to a unique solution (m̂, Ŝ) ∈ R^d × S^d_{++} to (EV). In general, however, if (m, S) solves (EV), this does not guarantee that m is a good estimator of m_π. To see this, consider the equations in the following form, recalling that v = V/n:

E[∇v(m + S^{1/2} Z)] = 0,   S E[∇²v(m + S^{1/2} Z)] = (1/n) I_d.   (2.1)

Let x ≠ m∗ be a critical point of v, that is, ∇v(x) = 0, and consider the pair (m, S) = (x, 0).
For this (m, S) we have

E[∇v(m + S^{1/2} Z)] = ∇v(x) = 0,   S E[∇²v(m + S^{1/2} Z)] = 0 ≈ (1/n) I_d.

Thus (x, 0) is an approximate solution to (2.1), and by continuity, we expect that there is an exact solution nearby. In other words, to each critical point x of v is associated a solution (m, S) ≈ (x, 0) of (2.1). The solution (m, S) of (2.1) in which we are interested, then, is the one near (m∗, 0). Lemma 1 below formalizes this intuition; we show there is a unique solution (m, S) to (EV) in the set

R_V = {(m, S) ∈ R^d × S^d_{++} : S ⪯ 2H^{−1}, ∥√H √S∥² + ∥√H (m − m∗)∥² ≤ 8},   (2.2)

where H = ∇²V(m∗) = n∇²v(m∗). Note that due to the scaling of H with n, the set R_V is a small neighborhood of (m∗, 0). We call this unique solution (m, S) in R_V the “canonical” solution of (EV). We expect the Gaussian distribution corresponding to this canonical solution to be the minimizer of (1.1), although we have not proved this. Regardless of whether it is true, we will redefine (m̂, Ŝ) to denote the canonical solution. Indeed, whether N(m̂, Ŝ) actually minimizes the KL divergence or is only a local minimizer is immaterial for the purpose of estimating m_π.

In the rest of this section, we state our assumptions on v, a lemma guaranteeing a canonical solution (m̂, Ŝ) to (EV), and our main results bounding the mean and covariance errors of the Laplace and Gaussian VI approximations.

2.1 Assumptions

Our main theorem rests on rather mild assumptions on the regularity of the potential v.

Assumption V0. The function v is at least C³ and has a unique global minimizer x = m∗.

Let α₂ be a lower bound on λ_min(∇²v(m∗)) and β₂ be an upper bound on λ_max(∇²v(m∗)).

Assumption V1. There exists r > 0 such that N := nr ≥ d³ and

(√r / (α₂√α₂)) sup_{∥y∥≤1} ∥∇³v(m∗ + √(r/α₂) y)∥ ≤ 1/2.   (2.3)

Note that the left-hand side of (2.3) is monotonically increasing with r.
Indeed, changing variables to z = √(r/α₂) y, we see that the supremum is taken over the domain {∥z∥ ≤ √(r/α₂)}, which grows with r. Furthermore, the left-hand side equals zero when r = 0. Therefore, this assumption states that we can increase r from 0 up to a large multiple of d³/n while keeping the left-hand side below 1/2.

Remark 2.1. Define β₃ = (1/2) α₂√α₂ / √r, so that r = (1/4) α₂³/β₃². By Assumption V1, we have

sup_{∥y∥≤1} ∥∇³v(m∗ + √(r/α₂) y)∥ ≤ β₃.

Hence, we can also think of β₃ as an upper bound on ∥∇³v∥. For future reference, we also define

C_{2,3} := 1/r = 4β₃²/α₂³.   (2.4)

Assumption V2 (Polynomial growth of ∥∇^k v∥, k = 3, 4). For some 0 < q ≲ 1 we have

(√r / (α₂√α₂)) ∥∇³v(m∗ + √(r/α₂) y)∥ ≤ 1 + ∥y∥^q,   ∀y ∈ R^d.   (2.5)

Here, r is from Assumption V1. If v is C⁴, we additionally assume that

(r / α₂²) ∥∇⁴v(m∗ + √(r/α₂) y)∥ ≲ 1 + ∥y∥^q,   ∀y ∈ R^d,

with the same q and r.

Note that Assumption V1 guarantees that (2.5) is satisfied inside the unit ball {∥y∥ ≤ 1}. Therefore, (2.5) simply states that we can extend the constant bound 1/2 to a polynomial bound outside the unit ball. Also, note that if (2.5) is satisfied for some q only up to a constant factor (i.e., ≲) in the region {∥y∥ ≥ 1}, then we can always increase q to ensure the inequality is satisfied exactly.

Assumption V3 (Growth of v away from the minimum). Let q be as in Assumption V2. Then

v(m∗ + x) ≥ ((d + 12q + 36)/n) log(√(nβ₂) ∥x∥),   ∀ ∥x∥ ≥ √(r/β₂).   (2.6)

See Section 3 below for further explanation of the intuition behind, and consequences of, the above assumptions.

2.2 Main result

We are now ready to state our main results. First, we characterize the Gaussian VI parameters (m̂_π, Ŝ_π):

Lemma 1. Let Assumptions V0, V1, V2 be satisfied and assume √(nr)/d ≥ 40√2 (√3 + √((2q)!)), where r, q are from Assumptions V1, V2, respectively. Define H = ∇²V(m∗) = n∇²v(m∗).
Then there exists a unique (m, S) = (m̂_π, Ŝ_π) in the set

R_V = {(m, S) ∈ R^d × S^d_{++} : S ⪯ 2H^{−1}, ∥√H √S∥² + ∥√H (m − m∗)∥² ≤ 8}

which solves (EV). Moreover, Ŝ_π satisfies

(2/(3nβ₂)) I_d ⪯ Ŝ_π ⪯ (2/(nα₂)) I_d.   (2.7)

We now state our bounds on the mean and covariance errors. For simplicity, we restrict ourselves to the case v ∈ C⁴. See Theorem 1-W for results in the case v ∈ C³ \ C⁴.

Theorem 1 (Accuracy of Gaussian VI). Let Assumption V3 and the assumptions from Lemma 1 be satisfied, and let m̂_π, Ŝ_π be as in this lemma. Recall the definition of C_{2,3} from (2.4). If v ∈ C⁴, then

∥m̂_π − m_π∥ ≲ (1/√(nα₂)) (C_{2,3} d³/n)^{3/2},   ∥Ŝ_π − S_π∥ ≲ (1/(nα₂)) (C_{2,3} d³/n).   (2.8)

In Section 3.3, we prove that Lemma 1 and Theorem 1 are a consequence of analogous statements for a certain affine invariant density. See that subsection, and Section 3 more generally, for proof overviews.

2.3 An example: Logistic Regression

As noted in the introduction, our results show that Gaussian VI yields very accurate mean and covariance approximations; in fact, the mean estimate is a full factor of 1/n more accurate than the mean estimate given by the Laplace approximation. Neither our bounds nor those on the Laplace error in [Spo22] and [KGB22] are proven to be tight, but we will now confirm numerically that the bounds give the correct asymptotic scalings with n for a logistic regression example. We also show how to check the assumptions for this example.

In logistic regression, we observe n covariates x_i ∈ R^d and corresponding labels y_i ∈ {0, 1}. The labels are generated randomly from the covariates and a parameter z ∈ R^d via

p(y_i | x_i, z) = s(x_i^T z)^{y_i} (1 − s(x_i^T z))^{1−y_i},

where s(a) = (1 + e^{−a})^{−1} is the sigmoid. In other words, y_i ∼ Bern(s(x_i^T z)). We take the ground truth z to be z = e_1 = (1, 0, . . . , 0), and we generate the x_i, i = 1, . . . , n i.i.d. from N(0, λ² I_d), so in particular the covariates themselves do not depend on z.
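The data-generating process just described, together with the averaged negative log-likelihood v of (2.9), can be sketched as follows. This is our own illustrative code (variable names and the numerically stable `logaddexp` form are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, lam = 2, 500, np.sqrt(5.0)

z_true = np.zeros(d)
z_true[0] = 1.0                              # ground truth z = e_1
X = rng.normal(0.0, lam, size=(n, d))        # covariates x_i ~ N(0, lam^2 I_d)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

y = rng.binomial(1, sigmoid(X @ z_true))     # labels y_i ~ Bern(s(x_i^T z))

def v(z):
    """Averaged negative log-likelihood of eq. (2.9), written stably:
    -log p(y_i | x_i, z) = log(1 + exp(a_i)) - y_i * a_i with a_i = x_i^T z."""
    a = X @ z
    return np.mean(np.logaddexp(0.0, a) - y * a)

print(v(z_true))   # an order-1 quantity, as assumed for v_n
```

The identity used in `v` follows from log s(a) = a − log(1 + e^a) and log(1 − s(a)) = −log(1 + e^a), which keeps the evaluation free of overflow for large |a|.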
We take a flat prior, so that the posterior distribution of z is simply the likelihood, π(z) = π_n(z | x_{1:n}) ∝ e^{−n v(z)}, where

v(z) = −(1/n) ∑_{i=1}^{n} log p(y_i | x_i, z) = −(1/n) ∑_{i=1}^{n} [y_i log s(x_i^T z) + (1 − y_i) log(1 − s(x_i^T z))].   (2.9)

Numerical Simulation

For the numerical simulation displayed in Figure 1, we take d = 2 and n = 100, 200, . . . , 1000. For each n, we draw ten sets of covariates x_i, i = 1, . . . , n from N(0, λ² I_d) with λ = √5, yielding ten posterior distributions π_n(· | x_{1:n}). We then compute the Laplace and VI mean and covariance approximation errors for each n and each of the ten posteriors at a given n. The solid lines in Figure 1 depict the average approximation errors over the ten distributions at each n. The shaded regions depict the spread of the middle six out of ten approximation errors. See Appendix D for details about the simulation.

In the left panel of Figure 1 depicting the mean error, the slopes of the best-fit lines are −1.04 and −2.02 for Laplace and Gaussian VI, respectively. For the covariance error in the right-hand panel, the slopes of the best-fit lines are −2.09 and −2.12 for Laplace and Gaussian VI. This confirms that our bounds, the mean bound of [KGB22], and the bound (1.4) (also implied by results in [Spo22]) are tight in their n-dependence.

Verification of Assumptions

It is well known that the likelihood (2.9) is convex and has a finite global minimizer z = m∗ (the MLE), provided the data x_i, i = 1, . . . , n are not linearly separable. Assumption V0 is satisfied in this case. For simplicity, we verify the remaining assumptions in the case that n is large enough that we can approximate v by the population log likelihood v∞, whose global minimizer is m∗ = e_1, the ground truth vector. Using this approximation, we show in Appendix D that

α₂ ≳ λ² s′(λ),   β₂ ≤ λ²/4,   and   ∥∇³v∞(z)∥ ≤ β₃ := 2λ³,   ∀z ∈ R^d.   (2.10)

To verify Assumption V1, we need to find r such that

sup_{∥z−m∗∥ ≤ √(r/α₂)} ∥∇³v(z)∥ ≤ α₂^{3/2}/(2√r).

Using the uniform bound (2.10) on ∥∇³v∥, it suffices to take r = α₂³/(4β₃²), in which case

C_{2,3} = 1/r = 4β₃²/α₂³ ≲ 1/s′(λ)³ ≲ (1 + cosh(λ))³,   (2.11)

using that s′(λ) = s(λ)(1 − s(λ)) = (1/2)(1 + cosh(λ))^{−1}. Thus Assumption V1 is satisfied as long as n ≥ d³/r, which is true provided n is larger than a constant multiple of (1 + cosh(λ))³. Next, we can use (2.10) and a similar bound on ∥∇⁴v∥ (which is also bounded uniformly over R^d) to show that Assumption V2 is satisfied with q = 0. It remains to check Assumption V3, which we do in Appendix D using the convexity of v. Indeed, convexity immediately implies at least linear growth away from any point. We conclude that the conditions of Theorem 1 are met.

3 Proof Overview: Affine Invariant Rescaling and Hermite Expansion

In this section, we overview the proof of Theorem 1. We start in Section 3.1 by explaining the affine invariance inherent to this problem. This motivates us to rescale V = nv to obtain a new affine invariant function W. In Section 3.2, we state Assumptions W0–W3 on W, which include the definition of a scale-free parameter N intrinsic to W. We then state our main results for W: Lemma 1-W and Theorem 1-W. In Section 3.3, we deduce Lemma 1 and Theorem 1 for V from the lemma and theorem for W. We outline the proof of Theorem 1-W in Section 3.4. The proof of Lemma 1-W is of a different flavor, and is postponed to Section 5.

3.1 Affine Invariance

To prove Theorem 1, we will bound the quantities

∥Ŝ_π^{−1/2}(m̂_π − m_π)∥,   ∥Ŝ_π^{−1/2} S_π Ŝ_π^{−1/2} − I_d∥.   (3.1)

As shown in Section 3.3, combining the bounds on (3.1) with bounds on ∥Ŝ_π∥ will give the desired estimates in Theorem 1. The reason for considering (3.1), rather than directly bounding the quantities in the theorem, is explained in Section 3.4.
In the following lemma, we show that the quantities (3.1) are affine invariant. We discuss the implications of this fact at the end of the subsection. First, we make the following definition.

Definition 3.1. Let $f$ be a $C^2$ function with unique global minimizer $m_{*f}$, and let $H_f = \nabla^2 f(m_{*f})$. Then
\[
\mathcal{R}_f = \left\{ (m, S) \in \mathbb{R}^d \times S^d_{++} : \ S \preceq 2H_f^{-1}, \ \|\sqrt{H_f}\sqrt{S}\|^2 + \|\sqrt{H_f}(m - m_{*f})\|^2 \le 8 \right\}. \tag{3.2}
\]

Lemma 3.1. Let $V_1, V_2 \in C^2(\mathbb{R}^d)$, where $V_2(x) = V_1(Ax + b)$ for some $b \in \mathbb{R}^d$ and invertible $A \in \mathbb{R}^{d\times d}$. Let $\pi_i \propto e^{-V_i}$, $i = 1, 2$. Then the pair $(\hat m_1, \hat S_1)$ is a unique solution to $(E_{V_1})$ in the set $\mathcal{R}_{V_1}$ if and only if the pair $(\hat m_2, \hat S_2)$ given by
\[
\hat m_2 = A^{-1}(\hat m_1 - b), \qquad \hat S_2 = A^{-1}\hat S_1 A^{-T} \tag{3.3}
\]
is a unique solution to $(E_{V_2})$ in the set $\mathcal{R}_{V_2}$. Furthermore,
\[
\|\hat S_2^{-1/2}(\hat m_2 - m_{\pi_2})\| = \|\hat S_1^{-1/2}(\hat m_1 - m_{\pi_1})\|, \qquad \|\hat S_2^{-1/2} S_{\pi_2} \hat S_2^{-1/2} - I_d\| = \|\hat S_1^{-1/2} S_{\pi_1} \hat S_1^{-1/2} - I_d\|. \tag{3.4}
\]
See Lemma C.1 of Appendix C for the proof of the first statement. The proof of (3.4) follows from (3.3), the fact that
\[
m_{\pi_2} = A^{-1}(m_{\pi_1} - b), \qquad S_{\pi_2} = A^{-1} S_{\pi_1} A^{-T}, \tag{3.5}
\]
and the following lemma.

Lemma 3.2. Let $C, D \in S^d_{++}$ be symmetric positive definite matrices and $x \in \mathbb{R}^d$. Then $\|C^{-1/2}x\| = \sqrt{x^T C^{-1} x}$ and
\[
\|C^{-1/2} D C^{-1/2} - I_d\| = \sup_{u \ne 0} \frac{u^T D u}{u^T C u} - 1.
\]
This is a simple linear algebra result, so we omit the proof.

Discussion. Lemma 3.1 shows that our bounds on the quantities (3.1) should themselves be affine invariant; i.e., the same bounds should hold if we replace $V = nv$ by any function in the set $\{V(A\cdot + b) : A \in \mathbb{R}^{d\times d} \text{ invertible}, \ b \in \mathbb{R}^d\}$. This motivates us to identify an affine-invariant large parameter $N$. It is clear that $n$ itself cannot be the correct parameter $N$ because $n$ is not well-defined: $nv = (n/c)(cv)$ for any $c > 0$. Another natural candidate for $N$, which removes this degree of freedom, is $N = \lambda_{\min}(\nabla^2 V(m_*))$. However, $\lambda_{\min}(\nabla^2 V(m_*))$ is not affine-invariant. Indeed, replacing $V(x)$ by $V(cx)$, for example, changes $\lambda_{\min}$ by a factor of $c^2$.
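The mechanism behind the invariance (3.4) — Lemma 3.2 combined with the transformation rules (3.3) and (3.5) — can be sanity-checked numerically. The sketch below uses arbitrary illustrative matrices (an SPD $C$ and an invertible $A$, both chosen for this example only) and verifies that the Mahalanobis-type quantity $\|C^{-1/2}x\| = \sqrt{x^T C^{-1} x}$ is unchanged under $x \mapsto A^{-1}x$, $C \mapsto A^{-1}CA^{-T}$.

```python
import numpy as np

def mahal(C, x):
    # First identity of Lemma 3.2: ||C^{-1/2} x|| = sqrt(x^T C^{-1} x)
    return np.sqrt(x @ np.linalg.solve(C, x))

C = np.array([[2.0, 0.5], [0.5, 1.0]])   # arbitrary SPD example
x = np.array([1.0, -3.0])
A = np.array([[1.0, 2.0], [0.0, 3.0]])   # arbitrary invertible matrix

# Transform as in (3.3)/(3.5): x -> A^{-1} x, C -> A^{-1} C A^{-T}
Ainv = np.linalg.inv(A)
x2 = Ainv @ x
C2 = Ainv @ C @ Ainv.T
```

Here `mahal(C, x)` and `mahal(C2, x2)` agree, which is the scalar version of the first equality in (3.4).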
To obtain an affine invariant bound, we will define $N$ in Assumption W1 below as a parameter intrinsic to the function $W = W[V]$ given by
\[
W(x) = V(H_V^{-1/2} x + m_V^*). \tag{3.6}
\]
It is straightforward to show that for any other $V_2(x) = V(Ax + b)$, we have
\[
W[V_2](x) = V_2(H_{V_2}^{-1/2} x + m_{V_2}^*) = V(H_V^{-1/2} x + m_V^*) = W[V](x).
\]
In other words, any function $V_2$ in the set $\{V(A\cdot + b) : A \in \mathbb{R}^{d\times d} \text{ invertible}, \ b \in \mathbb{R}^d\}$ maps to the same, affine invariant $W$. This function is the "correct" object of study, and any bounds we obtain must follow from properties intrinsic to $W$.

3.2  Assumptions and Results for $W$

In this section, we state assumptions on $W$, one of which identifies an appropriate affine invariant parameter $N$ intrinsic to $W$. This parameter is such that as $N$ increases, the measure $\rho \propto e^{-W}$ is more closely approximated by a Gaussian. We then state results on the existence and uniqueness of solutions $\hat m_\rho, \hat S_\rho$ to the first order optimality equations $(E_W)$, and obtain bounds in terms of $d$ and $N$ on the quality of the VI approximation to the mean and covariance of $\rho$.

Assumption W0. Let $W$ be at least $C^3$, with unique global minimizer $x = 0$, and $\nabla^2 W(0) = I_d$. Moreover, assume without loss of generality that $W(0) = 0$.

Next, we identify $N$ as a parameter quantifying the size of $\|\nabla^3 W\|$ in a certain neighborhood of zero:

Assumption W1. There exists $N \ge d^3$ such that
\[
\sqrt{N} \sup_{\|x\| \le 1} \|\nabla^3 W(\sqrt{N} x)\| \le \frac{1}{2}. \tag{3.7}
\]
This definition ensures that $N$ scales proportionally to $n$. Indeed, suppose $W_1$ is the affine invariant function corresponding to the equivalence class containing $n_1 v$, and let $W_1$ satisfy (3.7) with $N = N_1$. Then the affine invariant $W_2$ corresponding to the equivalence class containing $n_2 v$ is given by $W_2(x) = \frac{n_2}{n_1} W_1\big(\tfrac{\sqrt{n_1}}{\sqrt{n_2}} x\big)$. From here it is straightforward to see that $W_2$ satisfies (3.7) with $N = N_2$, where $N_2/N_1 = n_2/n_1$. To further understand the intuition behind this assumption, consider the following lemma.

Lemma 3.3.
Let $W$ satisfy Assumptions W0 and W1, and let $C \le \sqrt{N/d}$. Then
\[
\left| W(x) - \frac{\|x\|^2}{2} \right| \le \frac{C^3}{12} \frac{d\sqrt{d}}{\sqrt{N}} \quad \forall \|x\| \le C\sqrt{d}, \qquad
W(x) \ge \frac{\|x\|^2}{4} \quad \forall \|x\| \le \sqrt{N}. \tag{3.8}
\]
The lemma shows that $N$ quantifies how close $W$ is to a quadratic, and therefore how close $\rho \propto e^{-W}$ is to being Gaussian.

Proof. Taylor expanding $W(x)$ to second order for $\|x\| \le C\sqrt{d}$ and using (3.7), we have
\[
|W(x) - \|x\|^2/2| \le \frac{1}{3!} \sup_{\|x\| \le C\sqrt{d}} \|x\|^3 \|\nabla^3 W(x)\| \le \frac{C^3}{12} \frac{d\sqrt{d}}{\sqrt{N}}. \tag{3.9}
\]
The second inequality in (3.8) follows from the fact that $\nabla^2 W(x) \succeq \frac{1}{2} I_d$ for all $\|x\| \le \sqrt{N}$, as we now show. Taylor expanding $\nabla^2 W(x)$ to zeroth order, we get that
\[
\|\nabla^2 W(x) - \nabla^2 W(0)\| \le \sup_{\|x\| \le \sqrt{N}} \|x\| \|\nabla^3 W(x)\| \le \frac{1}{2}.
\]
Since $\nabla^2 W(0) = I_d$, it follows that $\nabla^2 W(x) \succeq \frac{1}{2} I_d$.

Assumption W2 (Polynomial growth of $\|\nabla^k W\|$, $k = 3, 4$). There exists $0 < q \lesssim 1$ such that
\[
\sqrt{N} \left\| \nabla^3 W\big(\sqrt{N} x\big) \right\| \le 1 + \|x\|^q \quad \forall x \in \mathbb{R}^d. \tag{3.10}
\]
If $W$ is $C^4$, then the following bound also holds with the same $q$:
\[
N \left\| \nabla^4 W\big(\sqrt{N} x\big) \right\| \lesssim 1 + \|x\|^q \quad \forall x \in \mathbb{R}^d. \tag{3.11}
\]
The factor of $N$ in (3.11) is also chosen to respect the proportional scaling of $N$ with $n$: if the affine invariant $W_1$ corresponding to $n_1 v$ satisfies (3.11) with $N = N_1$, then the affine invariant $W_2$ corresponding to $n_2 v$ satisfies (3.11) with the same $q$ and $N = N_2$, where $N_2/N_1 = n_2/n_1$.

Note that Assumption W1 guarantees that (3.10) is satisfied inside the unit ball; therefore, (3.10) simply states that we can extend the constant bound $1/2$ to a polynomial bound outside of this ball. Also, note that if the inequality is satisfied for some $q$ only up to a constant factor (i.e., $\lesssim$) in the region $\{\|x\| \ge 1\}$, then we can always increase $q$ to ensure the inequality is satisfied exactly.

Assumption W2 implies that expectations of the form $\mathbb{E}[\|\nabla^k W(Y)\|^p]$, $k = 3, 4$, decay with $N$. Indeed, we have

Lemma 3.4. Let $p \ge 0$ and let $Y \in \mathbb{R}^d$ be a random variable such that $\mathbb{E}[\|Y\|^{pq}] < \infty$, where $q$ is from Assumption W2. Let $k = 3$ or $4$, corresponding to the cases $W \in C^3$ or $W \in C^4$, respectively.
Then
\[
\mathbb{E}[\|\nabla^k W(Y)\|^p] \lesssim N^{-p(\frac{k}{2}-1)} \left( 1 + \mathbb{E}\big[\|Y/\sqrt{d}\|^{pq}\big] \right).
\]
Proof. By Assumption W2,
\[
\mathbb{E}[\|\nabla^k W(Y)\|^p] \lesssim N^{-p(\frac{k}{2}-1)}\, \mathbb{E}\left[\big(1 + \|Y/\sqrt{N}\|^q\big)^p\right]
\le N^{-p(\frac{k}{2}-1)}\, \mathbb{E}\left[\big(1 + \|Y/\sqrt{d}\|^q\big)^p\right]
\lesssim N^{-p(\frac{k}{2}-1)} \left(1 + \mathbb{E}\big[\|Y/\sqrt{d}\|^{pq}\big]\right). \tag{3.12}
\]
In the second inequality we used that $d \le N$.

If $\mathbb{E}[\|Y/\sqrt{d}\|^{pq}]$ is $d$-independent, as for Gaussian random variables, then the above bound reduces to $\mathbb{E}[\|\nabla^k W(Y)\|^p] \lesssim N^{-p(\frac{k}{2}-1)}$.

Assumption W3 (Separation from Zero; Growth at Infinity). We have
\[
W(x) \ge (d + 12q + 36) \log \|x\| \quad \forall \|x\| \ge \sqrt{N},
\]
where $q$ is from Assumption W2.

Remark 3.1. For consistency with the previous assumptions, let us also reformulate this one in terms of $W(\sqrt{N}x)$:
\[
W(\sqrt{N} x) \ge (d + 12q + 36) \log \sqrt{N} + (d + 12q + 36) \log \|x\| \quad \forall \|x\| \ge 1. \tag{3.13}
\]
Recall that inside the unit ball, $W(\sqrt{N}x)$ is no less than $N\|x\|^2/4$, by Lemma 3.3. Therefore, the value of $W(\sqrt{N}x)$ increases up to at least $N/4$ as $x$ approaches unit norm. We can interpret (3.13) as saying that outside the unit ball, we must maintain constant separation of order $d \log N$ from zero, and $W(x)$ must grow at least logarithmically in $\|x\|$ as $\|x\| \to \infty$.

We now state the existence and uniqueness of solutions to $(E_W)$ in the region $\mathcal{R}_W$.

Lemma 1-W. Take Assumptions W0, W1, and W2 to be true, and assume $\sqrt{N}/d \ge 40\sqrt{2}\big(\sqrt{3} + \sqrt{(2q)!}\big)$, where $q$ is from Assumption W2. Then there exists a unique $(m, S) = (\hat m_\rho, \hat S_\rho) \in \mathcal{R}_W$,
\[
\mathcal{R}_W = \{(m, S) \in \mathbb{R}^d \times S^d_{++} : S \preceq 2I_d, \ \|S\| + \|m\|^2 \le 8\}, \tag{3.14}
\]
solving $(E_W)$. The matrix $\hat S_\rho$ furthermore satisfies
\[
\tfrac{2}{3} I_d \preceq \hat S_\rho \preceq 2 I_d. \tag{3.15}
\]
See Section 5 for the proof. Note that $\mathcal{R}_W$ as defined here is the same as in Definition 3.1, since $m_W^* = 0$ and $H_W = \nabla^2 W(0) = I_d$. We will make frequent use of the following inequality, which summarizes the bounds on $\hat m_\rho, \hat S_\rho$ guaranteed by the lemma:
\[
\|\hat m_\rho\| \le 2\sqrt{2}, \qquad \tfrac{2}{3} I_d \preceq \hat S_\rho \preceq 2 I_d. \tag{3.16}
\]
Theorem 1-W. Take Assumptions W0, W1, W2, and W3 to be true, and let $(\hat m_\rho, \hat S_\rho)$ be as in the above lemma.
Then
\[
\|\hat S_\rho^{-1/2}(\hat m_\rho - m_\rho)\| \lesssim
\begin{cases}
\dfrac{d^3}{N}, & \text{if } W \in C^3, \\[2mm]
\left(\dfrac{d^3}{N}\right)^{3/2}, & \text{if } W \in C^4,
\end{cases}
\qquad
\|\hat S_\rho^{-1/2} S_\rho \hat S_\rho^{-1/2} - I_d\| \lesssim \frac{d^3}{N}.
\]

3.3  From $V$ to $W$ and back

In the following sections, we prove Lemma 1-W and Theorem 1-W. In Lemma C.2 in the appendix, we show that Assumptions V0–V3 imply Assumptions W0–W3 with $N = nr$ and the same $q$. From these results, Lemma 1 and Theorem 1 easily follow.

Proof of Lemma 1. Let $\rho \propto e^{-W}$, where $W$ is defined as in (3.6). By Lemma C.2, the assumptions on $V$ imply the assumptions on $W$. Hence, we can apply Lemma 1-W to conclude there is a unique $(\hat m_\rho, \hat S_\rho) \in \mathcal{R}_W$ solving $(E_W)$, with $\frac{2}{3} I_d \preceq \hat S_\rho \preceq 2I_d$. Since $W$ is an affine transformation of $V$, it follows by Lemma 3.1 that there exists a unique $(\hat m_\pi, \hat S_\pi) \in \mathcal{R}_V$ solving $(E_V)$, with $\hat S_\pi = H_V^{-1/2} \hat S_\rho H_V^{-1/2}$. The inequality (2.7) for $\pi$ can be deduced from the corresponding inequality (3.15) for $\hat S_\rho$.

Proof of Theorem 1. First note that
\[
\|\hat m_\pi - m_\pi\| \le \|\hat S_\pi^{1/2}\| \,\|\hat S_\pi^{-1/2}(\hat m_\pi - m_\pi)\| \lesssim \frac{1}{\sqrt{n\alpha_2}} \|\hat S_\pi^{-1/2}(\hat m_\pi - m_\pi)\|,
\]
and
\[
\|\hat S_\pi - S_\pi\| = \|\hat S_\pi^{1/2}(\hat S_\pi^{-1/2} S_\pi \hat S_\pi^{-1/2} - I_d)\hat S_\pi^{1/2}\| \lesssim \frac{1}{n\alpha_2} \|\hat S_\pi^{-1/2} S_\pi \hat S_\pi^{-1/2} - I_d\|, \tag{3.17}
\]
using the bound on $\hat S_\pi$ from Lemma 1. Next note that by Lemma 3.1 (affine invariance) we have
\[
\|\hat S_\pi^{-1/2}(\hat m_\pi - m_\pi)\| = \|\hat S_\rho^{-1/2}(\hat m_\rho - m_\rho)\|, \qquad \|\hat S_\pi^{-1/2} S_\pi \hat S_\pi^{-1/2} - I_d\| = \|\hat S_\rho^{-1/2} S_\rho \hat S_\rho^{-1/2} - I_d\|. \tag{3.18}
\]
Apply Theorem 1-W to conclude, recalling that $N = nr$ and $C_{2,3} = 1/r$, so that $d^3/N = d^3/(nr) = C_{2,3}\, d^3/n$.

3.4  Overview of Theorem 1-W proof

For brevity let $m = \hat m_\rho$, $S = \hat S_\rho$, and $\sigma = S^{1/2}$. We continue to denote the mean and covariance of $\rho$ by $m_\rho$ and $S_\rho$, respectively. Let $\bar W(x) = W(m + \sigma x)$ and note that the optimality equations $(E_W)$ can be written as
\[
\mathbb{E}[\nabla \bar W(Z)] = 0, \qquad \mathbb{E}[\nabla^2 \bar W(Z)] = I_d. \tag{3.19}
\]
The proof of Theorem 1-W is based on several key observations.

1) The optimality conditions (3.19) imply that the Hermite series expansion of $\bar W$ is given by $\bar W(x) = \text{const.}$
$+ \frac{1}{2}\|x\|^2 + r_3(x)$, where
\[
r_3(x) = \sum_{k \ge 3} \frac{1}{k!} \langle c_k(\bar W), H_k(x) \rangle. \tag{3.20}
\]
2) The assumptions on $W$ imply that $r_3 \sim N^{-1/2}$.

3) We can represent the quantities of interest from Theorem 1-W as expectations with respect to $\bar X \sim \bar\rho \propto e^{-\bar W}$:
\[
\|\sigma^{-1}(m_\rho - m)\| = \sup_{\|u\|=1} \mathbb{E}[f_{1,u}(\bar X)], \qquad
\|\sigma^{-1} S_\rho \sigma^{-1} - I_d\| \le \sup_{\|u\|=1} \mathbb{E}[f_{2,u}(\bar X)] + \|\sigma^{-1}(m_\rho - m)\|^2, \tag{3.21}
\]
where $f_{1,u}(x) = u^T x$ and $f_{2,u}(x) = (u^T x)^2 - 1$.

4) We have
\[
\mathbb{E}[f(\bar X)] = \frac{\mathbb{E}[f(Z)e^{-r_3(Z)}]}{\mathbb{E}[e^{-r_3(Z)}]} = \frac{\mathbb{E}[f(Z)(1 - r_3(Z) + r_3(Z)^2/2 + \dots)]}{\mathbb{E}[e^{-r_3(Z)}]}. \tag{3.22}
\]
5) We have $\mathbb{E}[f(Z)] = 0$ and $\mathbb{E}[f(Z) r_3(Z)] = 0$ for $f = f_{1,u}, f_{2,u}$, because the remainder $r_3$ is orthogonal to linear and quadratic $f$ with respect to the Gaussian measure.

Therefore, the leading order term in $\mathbb{E}[f(\bar X)]$ is $\frac{1}{2}\mathbb{E}[f(Z) r_3(Z)^2] \sim N^{-1}$ for both $f = f_{1,u}$ and $f = f_{2,u}$, and hence by (3.21) the quantities of interest are no larger than $N^{-1}$. This is the essence of the proof when $W \in C^3$.

Now that we have given this overview, let us go into a few more details about the above points, and consider the case $W \in C^4$.

1) We can write $\bar W(x) = \text{const.} + \frac{1}{2}\|x\|^2 + r_3(x)$, where $r_3$ is the third order Hermite series remainder. The Hermite series expansion of $\bar W$ is defined as
\[
\bar W(x) = \sum_{k=0}^\infty \frac{1}{k!} \langle c_k(\bar W), H_k(x) \rangle, \qquad c_k(\bar W) := \mathbb{E}[\bar W(Z) H_k(Z)]. \tag{3.23}
\]
Here, the $c_k$ and $H_k(x)$ are tensors in $(\mathbb{R}^d)^{\otimes k}$. Specifically, $H_k(x)$ is the tensor of all order $k$ Hermite polynomials, enumerated as $H_k^{(\alpha)}$, $\alpha \in [d]^k$, with some entries repeating. For $k = 0, 1, 2$, the Hermite tensors are given by
\[
H_0(x) = 1, \qquad H_1(x) = x, \qquad H_2(x) = xx^T - I_d.
\]
See Appendix A.1 and B.1 for further details on Hermite series. Distinct Hermite polynomials are orthogonal to each other with respect to the Gaussian weight. In particular, if $f$ is an order $k$ polynomial and $\ell > k$, then
\[
\mathbb{E}[f(Z) H_\ell^{(\alpha)}(Z)] = 0 \quad \forall \alpha \in [d]^\ell.
\]
In general, the $H_k$ are given by
\[
H_k(x)\, e^{-\|x\|^2/2} = (-1)^k \nabla^k e^{-\|x\|^2/2}. \tag{3.24}
\]
This representation of the Hermite polynomials leads to the following "Gaussian integration by parts" identity for a $k$-times differentiable function $f$:
\[
\mathbb{E}[f(Z) H_k(Z)] = \mathbb{E}[\nabla^k f(Z)]. \tag{3.25}
\]
This is a generalization of Stein's identity, $\mathbb{E}[Z_i f(Z)] = \mathbb{E}[\partial_{x_i} f(Z)]$. Since $\bar W$ is at least three times differentiable, we can use Gaussian integration by parts to write $c_1, c_2$ as
\[
c_1(\bar W) := \mathbb{E}[\bar W(Z) H_1(Z)] = \mathbb{E}[\nabla \bar W(Z)] = 0, \qquad
c_2(\bar W) := \mathbb{E}[\bar W(Z) H_2(Z)] = \mathbb{E}[\nabla^2 \bar W(Z)] = I_d, \tag{3.26}
\]
where the last equality in each line comes from the optimality conditions (3.19). Therefore the Hermite series expansion of $\bar W$ takes the form
\[
\bar W(x) = \mathbb{E}[\bar W(Z)] + \langle 0, H_1(x) \rangle + \frac{1}{2}\langle I_d, H_2(x) \rangle + r_3(x)
= \mathbb{E}[\bar W(Z)] + 0 + \frac{1}{2}(\|x\|^2 - d) + r_3(x)
= \text{const.} + \frac{1}{2}\|x\|^2 + r_3(x), \tag{3.27}
\]
where $r_3$ is the third order remainder.

2) The assumptions imply $r_3 \sim 1/\sqrt{N}$. Indeed, since $W$ is $C^3$ and $k \ge 3$, we can apply "partial" Gaussian integration by parts to express $c_k$ as
\[
c_k = \mathbb{E}[H_k(Z)\bar W(Z)] = \mathbb{E}[H_{k-3}(Z) \otimes \nabla^3 \bar W(Z)].
\]
But by Assumptions W1 and W2 we have that $\|\nabla^3 W\| \sim 1/\sqrt{N}$, and hence $\|\nabla^3 \bar W\| \le \|\sigma\|^3 \|\nabla^3 W\| \sim 1/\sqrt{N}$, since $\sigma \preceq \sqrt{2}\, I_d$ by Lemma 1-W. Therefore each $c_k \sim 1/\sqrt{N}$ for $k \ge 3$, so $r_3 \sim 1/\sqrt{N}$ as well.

Now suppose $W \in C^4$, and write $r_3$ as $r_3(x) = \frac{1}{3!}\langle c_3, H_3(x)\rangle + r_4(x)$. We know $\langle c_3, H_3(x)\rangle \sim 1/\sqrt{N}$, and by an analogous argument as for $r_3$, we can show that each of the coefficients $c_k$, $k \ge 4$, has order $1/N$. Hence $r_4 \sim 1/N$, so that $r_3 = O(N^{-1/2}) + O(N^{-1})$ and $r_3^2 = O(N^{-1}) + O(N^{-3/2}) + O(N^{-2})$. We can then show that the order $N^{-1}$ term in $r_3^2$ is orthogonal to $f_{1,u}$ with respect to the Gaussian weight, and hence $\mathbb{E}[f_{1,u}(Z) r_3(Z)^2]$ is of order $N^{-3/2}$. This is why the mean error is smaller when $W \in C^4$.

We will prove 3) in the next section, and 4) follows directly from the representation (3.27). 5) follows from the definition of $r_3$ as a sum of third and higher order Hermite polynomials. This discussion explains how the $N^{-1}$ and $N^{-3/2}$ scalings arise in Theorem 1-W.
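The Gaussian integration by parts identity (3.25) is easy to verify numerically in one dimension using a probabilist's Gauss–Hermite quadrature rule. The sketch below checks $\mathbb{E}[f(Z)H_3(Z)] = \mathbb{E}[f'''(Z)]$ for the illustrative test function $f(z) = z^3$ (so $f''' \equiv 6$), with $H_3(z) = z^3 - 3z$.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Probabilist's Gauss-Hermite rule: integrates against the weight e^{-z^2/2};
# dividing the weights by sqrt(2*pi) turns it into E[.] under N(0, 1).
nodes, weights = hermegauss(8)
weights = weights / np.sqrt(2 * np.pi)

H3 = lambda z: z**3 - 3*z   # probabilist's Hermite polynomial of order 3
f = lambda z: z**3          # test function; f'''(z) = 6 identically

lhs = np.sum(weights * f(nodes) * H3(nodes))  # E[f(Z) H_3(Z)] = E[Z^6] - 3 E[Z^4] = 15 - 9 = 6
rhs = np.sum(weights * 6.0)                   # E[f'''(Z)] = 6
```

An 8-node rule is exact for polynomials of degree up to 15, so both sides equal $6$ up to floating-point error.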
Obtaining the correct scaling with dimension $d$ requires a bit more work. The scaling with $d$ of the overall error bound depends, among other things, on the scaling with $d$ of expectations of the form $\mathbb{E}[r_k(Z)^p]$, $k = 3, 4$ (see Lemma 4.1 below for further discussion of the bound's $d$-dependence).

We show that $\mathbb{E}[r_k(Z)^p] \sim \mathbb{E}\big[(\|Z\|^k)^p\big] \sim d^{pk/2}$ using the following explicit formula for $r_k$. This result is known in one dimension; see Section 4.15 in [Leb72]. However, we could not find the multidimensional version in the literature, so we have proved it here.

Proposition 3.1. Assume $\bar W \in C^k$ for $k = 3$ or $k = 4$. Let $\bar W(x) = \sum_{j=0}^\infty \frac{1}{j!}\langle c_j(\bar W), H_j(x)\rangle$ be the Hermite series expansion of $\bar W$, and define
\[
r_k(x) = \bar W(x) - \sum_{j=0}^{k-1} \frac{1}{j!} \langle c_j(\bar W), H_j(x) \rangle. \tag{3.28}
\]
Then
\[
r_k(x) = \int_0^1 \frac{(1-t)^{k-1}}{(k-1)!} \, \mathbb{E}\left[\left\langle \nabla^k \bar W\big((1-t)Z + tx\big), \ H_k(x) - Z \otimes H_{k-1}(x) \right\rangle\right] dt. \tag{3.29}
\]
Note that (3.29) is analogous to the integral form of the remainder of a Taylor series. We state and prove this proposition in greater generality in Appendix B.1 below. Carefully applying Cauchy–Schwarz to the inner product in this formula (using the operator norm rather than the Frobenius norm, which would incur additional dimension dependence) allows us to bound $\mathbb{E}[|r_k(Z)|^p]$ by a product of expectations. One expectation involves $\|\nabla^k \bar W\|^p \sim N^{p(1-k/2)}$, and the other expectation, stemming from the $H_k$ and $Z \otimes H_{k-1}$ on the right-hand side of the inner product, involves a $(pk)$th degree polynomial in $\|Z\|$. This explains the $d^{pk/2}$ scaling of $\mathbb{E}[|r_k(Z)|^p]$.

4  Proof of Theorem 1-W

Let $\bar X \sim \bar\rho \propto e^{-\bar W}$, where $\bar W(x) = W(m + \sigma x)$, and $\sigma = \hat S_\rho^{1/2}$, $m = \hat m_\rho$ are from Lemma 1-W. Also, let
\[
r_3(x) = \sum_{k \ge 3} \frac{1}{k!} \langle c_k(\bar W), H_k(x) \rangle
\]
be the remainder of the Hermite expansion of $\bar W$.

Lemma 4.1 (Preliminary Bound).
If $v \in C^3$, then we have
\[
\|\sigma^{-1}(m - m_\rho)\| \lesssim \sqrt{\mathbb{E}\, r_3(Z)^4} + \sqrt{\mathbb{E}\, r_3(Z)^6} + \sqrt{\mathbb{E}\, r_3(\bar X)^6}\, \sup_{\|u\|=1} \sqrt{\mathbb{E}\,(u^T\bar X)^2} \tag{4.1}
\]
and
\[
\|\sigma^{-1} S_\rho \sigma^{-1} - I_d\| \lesssim \|\sigma^{-1}(m - m_\rho)\|^2 + \sqrt{\mathbb{E}\, r_3(Z)^4} + \sqrt{\mathbb{E}\, r_3(Z)^6} + \sqrt{\mathbb{E}\, r_3(\bar X)^6}\, \sup_{\|u\|=1} \sqrt{\mathbb{E}\,\big((u^T\bar X)^2 - 1\big)^2}. \tag{4.2}
\]
If $v \in C^4$, then
\[
\|\sigma^{-1}(m - m_\rho)\| \lesssim \sqrt{\mathbb{E}\, r_3(Z)^6} + \sqrt{\mathbb{E}\, r_3(\bar X)^6}\, \sup_{\|u\|=1} \sqrt{\mathbb{E}\,(u^T\bar X)^2}
+ \sup_{\|u\|=1} \left| \left\langle u \otimes c_3 \otimes c_4, \ \mathbb{E}[Z \otimes H_3(Z) \otimes H_4(Z)] \right\rangle \right| + \sqrt{\mathbb{E}\, r_4(Z)^4}. \tag{4.3}
\]
Remark 4.1. From the discussion in the previous section, we know $c_3, r_3 \sim N^{-1/2}$ and $c_4, r_4 \sim N^{-1}$. Therefore, we can easily read off the $N$-dependence of the overall error bound from (4.1) and (4.3). The $d$-dependence of the terms of the form $\sqrt{\mathbb{E}[r_k(Z)^p]}$ can be computed from our explicit formula for $r_k$, as discussed above. Furthermore, simple Laplace-type integral bounds in Section 4.3 show that $\mathbb{E}[f(\bar X)] \lesssim \mathbb{E}[f(Z)]$, so the $d$-dependence of the $\bar X$ expectations is the same as that of the $Z$ expectations. Finally, the $d$-dependence of $\langle u \otimes c_3 \otimes c_4, \mathbb{E}[Z \otimes H_3 \otimes H_4]\rangle$ can be estimated using the structure of the Hermite tensors; in particular, we show that at most $O(d^4)$ of the $d^8$ entries of $\mathbb{E}[Z \otimes H_3 \otimes H_4]$ are nonzero.

Proof. First, we prove point 3) from the above proof overview. Recall that $f_{1,u}(x) = u^T x$ and $f_{2,u}(x) = (u^T x)^2 - 1$. Note that we can write $\bar X = \sigma^{-1}(X - m)$, where $X \sim \rho \propto e^{-W}$. Therefore, $\mathbb{E}\,\bar X = \sigma^{-1}(m_\rho - m)$, and hence
\[
\|\sigma^{-1}(m_\rho - m)\| = \|\mathbb{E}\,\bar X\| = \sup_{\|u\|=1} \mathbb{E}[u^T \bar X] = \sup_{\|u\|=1} \mathbb{E}[f_{1,u}(\bar X)].
\]
Next, note that $\mathrm{Cov}(\bar X) = \sigma^{-1} S_\rho \sigma^{-1}$, and hence
\[
\|\sigma^{-1} S_\rho \sigma^{-1} - I_d\| = \|\mathrm{Cov}(\bar X) - I_d\| \le \|\mathbb{E}[\bar X \bar X^T - I_d]\| + \|\mathbb{E}\,\bar X\,\mathbb{E}\,\bar X^T\|
\le \sup_{\|u\|=1} \mathbb{E}[u^T(\bar X \bar X^T - I_d)u] + \|\mathbb{E}\,\bar X\|^2
= \sup_{\|u\|=1} \mathbb{E}[f_{2,u}(\bar X)] + \|\sigma^{-1}(m_\rho - m)\|^2. \tag{4.4}
\]
Now, recalling that $\bar W(x) = \text{const.} + \|x\|^2/2 + r_3(x)$, note that
\[
\mathbb{E}[f(\bar X)] = \frac{\mathbb{E}[f(Z)\, e^{-r_3(Z)}]}{\mathbb{E}[e^{-r_3(Z)}]}. \tag{4.5}
\]
Write
\[
e^{-r_3(Z)} = 1 - r_3(Z) + \frac{1}{2} r_3(Z)^2 - \frac{1}{3!} r_3(Z)^3 e^{\xi(Z)},
\]
where $\xi(Z)$ lies on the interval between $0$ and $-r_3(Z)$.
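The tilted-Gaussian representation (4.5) and the expansion of $e^{-r_3}$ can be checked numerically in one dimension. The sketch below is purely illustrative: it takes a hypothetical quartic remainder $r(z) = \varepsilon H_4(z)$ (quartic so that $e^{-r}$ is integrable), computes $\mathbb{E}[f(\bar X)] = \mathbb{E}[f(Z)e^{-r(Z)}]/\mathbb{E}[e^{-r(Z)}]$ for $f(z) = z^2 - 1$ by Gauss–Hermite quadrature, and compares it with the leading term $\frac{1}{2}\mathbb{E}[f(Z)r(Z)^2]/\mathbb{E}[e^{-r(Z)}]$ from the expansion; the orthogonality $\mathbb{E}[f(Z)] = \mathbb{E}[f(Z)r(Z)] = 0$ is also verified. The choice of $r$ and $\varepsilon$ is an assumption made for this example only.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

nodes, w = hermegauss(80)
w = w / np.sqrt(2 * np.pi)               # quadrature rule for E[.] under N(0, 1)
E = lambda vals: np.sum(w * vals)

eps = 1e-3
r = lambda z: eps * (z**4 - 6*z**2 + 3)  # eps * H_4(z): a toy Hermite-series remainder
f = lambda z: z**2 - 1                   # f_{2,u} in one dimension

Z = nodes
denom = E(np.exp(-r(Z)))                 # E[e^{-r(Z)}]  (>= 1 by Jensen)
exact = E(f(Z) * np.exp(-r(Z))) / denom  # E[f(Xbar)], the tilted expectation (4.5)
leading = 0.5 * E(f(Z) * r(Z)**2) / denom  # leading term of the expansion
```

Since $f$ is orthogonal to $1$ and to $r$, the tilted expectation is captured by the $\frac{1}{2}\mathbb{E}[f r^2]$ term up to an $O(\varepsilon^3)$ correction, so `exact` and `leading` agree to within a few percent at this $\varepsilon$.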
The key insight is that for $f = f_{1,u}$ and $f = f_{2,u}$ (at most second order polynomials), $f$ is orthogonal to both $1$ and $r_3$, since $r_3$ is a series of Hermite polynomials of order greater than $2$. Therefore,
\[
\mathbb{E}\left[f(Z)e^{-r_3(Z)}\right] = \mathbb{E}\left[f(Z)\left(1 - r_3(Z) + \frac{1}{2}r_3(Z)^2 - \frac{1}{3!}r_3(Z)^3 e^{\xi(Z)}\right)\right]
= \mathbb{E}\left[f(Z)\left(\frac{1}{2}r_3(Z)^2 - \frac{1}{3!}r_3(Z)^3 e^{\xi(Z)}\right)\right]. \tag{4.6}
\]
Combining (4.6) with (4.5), we get
\[
\mathbb{E}[f(\bar X)] = \frac{1}{2}\frac{\mathbb{E}\big[f(Z)r_3(Z)^2\big]}{\mathbb{E}\big[e^{-r_3(Z)}\big]} - \frac{1}{3!}\frac{\mathbb{E}[f(Z)r_3(Z)^3 e^{\xi(Z)}]}{\mathbb{E}\big[e^{-r_3(Z)}\big]} =: I_1 + I_2. \tag{4.7}
\]
Using Jensen's inequality and that $\mathbb{E}[r_3(Z)] = 0$, we have $\mathbb{E}[e^{-r_3(Z)}] \ge 1$. Hence,
\[
|I_1| \lesssim \left|\mathbb{E}\left[f(Z)r_3(Z)^2\right]\right|. \tag{4.8}
\]
To bound $I_2$, note that $e^{\xi} \le 1 + e^{-r_3}$, since $\xi \le 0$ if $r_3 \ge 0$ and $\xi \le -r_3$ if $r_3 \le 0$. Hence,
\[
|I_2| \lesssim \frac{\mathbb{E}\big[|f(Z)|\,|r_3(Z)|^3\big]}{\mathbb{E}\big[e^{-r_3(Z)}\big]} + \frac{\mathbb{E}\big[|f(Z)|\,|r_3(Z)|^3 e^{-r_3(Z)}\big]}{\mathbb{E}\big[e^{-r_3(Z)}\big]}
\le \mathbb{E}\big[|f(Z)|\,|r_3(Z)|^3\big] + \frac{\mathbb{E}\big[|f(Z)|\,|r_3(Z)|^3 e^{-r_3(Z)}\big]}{\mathbb{E}\big[e^{-r_3(Z)}\big]}, \tag{4.9}
\]
again using that $\mathbb{E}[e^{-r_3(Z)}] \ge 1$. Furthermore, using the conversion between $Z$ and $\bar X$ expectations (4.5), observe that
\[
\frac{\mathbb{E}\big[|f(Z)|\,|r_3(Z)|^3 e^{-r_3(Z)}\big]}{\mathbb{E}\big[e^{-r_3(Z)}\big]} = \mathbb{E}\left[\big|f(\bar X)\big|\,\big|r_3(\bar X)\big|^3\right].
\]
Incorporating this into the above bound on $|I_2|$ we get
\[
|I_2| \le \mathbb{E}\big[|f(Z)|\,|r_3(Z)|^3\big] + \mathbb{E}\left[\big|f(\bar X)\big|\,\big|r_3(\bar X)\big|^3\right]. \tag{4.10}
\]
Applying Cauchy–Schwarz to (4.10) we get
\[
|I_2| \le \sqrt{\mathbb{E}[r_3(Z)^6]}\sqrt{\mathbb{E}\, f(Z)^2} + \sqrt{\mathbb{E}\, r_3(\bar X)^6}\sqrt{\mathbb{E}\, f(\bar X)^2}.
\]
Adding this inequality to (4.8), we get
\[
\left|\mathbb{E}[f(\bar X)]\right| \lesssim \left|\mathbb{E}\big[f(Z)r_3(Z)^2\big]\right| + \sqrt{\mathbb{E}[r_3(Z)^6]}\sqrt{\mathbb{E}\, f(Z)^2} + \sqrt{\mathbb{E}\big[r_3(\bar X)^6\big]}\sqrt{\mathbb{E}\big[f(\bar X)^2\big]}. \tag{4.11}
\]
Taking $f(x) = u^T x$ and $f(x) = (u^T x)^2 - 1$ and applying Cauchy–Schwarz to the first term in (4.11) gives (4.1) and (4.2), respectively. If $v \in C^4$ and $f(x) = u^T x$, we can refine the bound (4.11), specifically the first term $\mathbb{E}[(u^T Z)r_3(Z)^2]$. Write
\[
r_3(x) = \frac{1}{3!}\langle c_3, H_3(x)\rangle + r_4(x).
\]
Then
\[
r_3(x)^2 = \frac{1}{(3!)^2}\left\langle c_3^{\otimes 2}, H_3(x)^{\otimes 2}\right\rangle + \frac{2}{3!}\, r_4(x)\langle c_3, H_3(x)\rangle + r_4(x)^2. \tag{4.12}
\]
To get the first summand on the right we use the fact that $\langle T, S\rangle^2 = \langle T^{\otimes 2}, S^{\otimes 2}\rangle$.
Substituting $x = Z$ in (4.12), multiplying by the scalar $u^T Z$, and taking the expectation of the result gives
\[
\mathbb{E}\left[(u^T Z)\, r_3(Z)^2\right] = \frac{1}{(3!)^2}\left\langle c_3^{\otimes 2}, \mathbb{E}\left[(u^T Z)\, H_3(Z)^{\otimes 2}\right]\right\rangle + \frac{2}{3!}\mathbb{E}\left[(u^T Z)\, r_4(Z)\, \langle c_3, H_3(Z)\rangle\right] + \mathbb{E}[(u^T Z)\, r_4(Z)^2]
= \frac{2}{3!}\mathbb{E}\left[(u^T Z)\, r_4(Z)\, \langle c_3, H_3(Z)\rangle\right] + \mathbb{E}\left[(u^T Z)\, r_4(Z)^2\right]. \tag{4.13}
\]
For the term on the right-hand side of the first line of (4.13), note that we have chosen to move the scalar $u^T Z$ onto the second tensor $H_3^{\otimes 2}$ in the tensor dot product, and we take the $Z$ expectation only after doing so. This term drops out in the second line because each entry of $(u^T Z) H_3(Z)^{\otimes 2}$ is a polynomial containing only odd powers of $Z$. To see why, see the primer on Hermite polynomials in Section A.1.

Next, let $g(x) = (u^T x)\langle c_3, H_3(x)\rangle$, so that
\[
\mathbb{E}\left[(u^T Z)\, r_4(Z)\, \langle c_3, H_3(Z)\rangle\right] = \mathbb{E}[g(Z)\, r_4(Z)].
\]
Since $\mathbb{E}[g(Z)^2] < \infty$ and $r_4$ is the tail of a convergent Hermite series, we have
\[
\mathbb{E}[g(Z)\, r_4(Z)] = \mathbb{E}\left[\sum_{k=4}^\infty g(Z)\frac{1}{k!}\langle c_k, H_k(Z)\rangle\right] = \sum_{k=4}^\infty \frac{1}{k!}\mathbb{E}[g(Z)\langle c_k, H_k(Z)\rangle].
\]
Furthermore, $g$ is a fourth order polynomial, and is therefore orthogonal to all Hermite polynomials of order greater than four. As a result, the above sum simplifies to
\[
\mathbb{E}[g(Z)\, r_4(Z)] = \frac{1}{4!}\mathbb{E}[g(Z)\langle c_4, H_4(Z)\rangle]
= \frac{1}{4!}\mathbb{E}[(u^T Z)\langle c_3, H_3(Z)\rangle\langle c_4, H_4(Z)\rangle]
= \frac{1}{4!}\left\langle u \otimes c_3 \otimes c_4, \ \mathbb{E}[Z \otimes H_3(Z) \otimes H_4(Z)]\right\rangle. \tag{4.14}
\]
Combining these calculations and applying Cauchy–Schwarz to the term $\mathbb{E}[(u^T Z)\, r_4(Z)^2]$ gives the preliminary bound (4.3).

4.1  Combining the bounds

In the following sections, we bound each of the terms appearing in (4.1), (4.2), (4.3). For convenience, we compile these bounds below, letting $\tau = d^3/N$. Lemma 4.2 gives
\[
|\langle u \otimes c_3 \otimes c_4, \mathbb{E}[Z \otimes H_3 \otimes H_4]\rangle| \lesssim \frac{d^{7/2}}{N^{3/2}} \le \tau^{3/2}. \tag{4.15}
\]
Corollary 4.1 applied with $Y = Z$ gives
\[
\sqrt{\mathbb{E}[r_3(Z)^4]} \lesssim \frac{d^3}{N} = \tau, \qquad \sqrt{\mathbb{E}[r_3(Z)^6]} \lesssim \left(\frac{d^3}{N}\right)^{3/2} = \tau^{3/2}, \qquad \sqrt{\mathbb{E}[r_4(Z)^4]} \lesssim \frac{d^4}{N^2} \le \tau^2. \tag{4.16}
\]
Corollary 4.1 applied with $Y = \bar X$, together with Corollary 4.2, gives
\[
\sqrt{\mathbb{E}[r_3(\bar X)^6]} \lesssim e^{\sqrt{d^3/N}}\left(\frac{d^3}{N}\right)^{3/2} = e^{\sqrt{\tau}}\, \tau^{3/2}.
\]
Finally, Corollary 4.3 gives
\[
\sup_{\|u\|=1}\sqrt{\mathbb{E}[(u^T\bar X)^2]} \lesssim e^{\sqrt{d^3/N}} = e^{\sqrt{\tau}}, \qquad \sup_{\|u\|=1}\sqrt{\mathbb{E}[((u^T\bar X)^2 - 1)^2]} \lesssim e^{\sqrt{d^3/N}} = e^{\sqrt{\tau}}. \tag{4.17}
\]
Substituting all of these bounds into (4.1), (4.2), and (4.3) finishes the proof of Theorem 1-W.

4.2  Hermite-related Bounds

In this section we bound $\langle u \otimes c_3 \otimes c_4, \mathbb{E}[Z \otimes H_3(Z) \otimes H_4(Z)]\rangle$ as well as $\mathbb{E}[r_k(Z)^p]$ for $k = 3, 4$, $p = 4, 6$, and $\mathbb{E}[r_3(\bar X)^6]$. We take all of the assumptions to be true, either in the $W \in C^3$ case or the $W \in C^4$ case.

Lemma 4.2. If $v \in C^4$ then
\[
|\langle u \otimes c_3 \otimes c_4, \mathbb{E}[Z \otimes H_3(Z) \otimes H_4(Z)]\rangle| \lesssim d^{7/2} N^{-3/2}. \tag{4.18}
\]
Proof. We use Lemma B.3 in Appendix B.1, which shows that
\[
\langle u \otimes c_3 \otimes c_4, \mathbb{E}[Z \otimes H_3(Z) \otimes H_4(Z)]\rangle = \langle u \otimes c_3, c_4\rangle. \tag{4.19}
\]
Writing $c_3 = \sum_{i,j,k=1}^d c_3^{ijk}\, e_i \otimes e_j \otimes e_k$ and noting that $|c_3^{ijk}| \le \|c_3\|$, we get
\[
|\langle u \otimes c_3, c_4\rangle| \le \sum_{i,j,k=1}^d |c_3^{ijk}| \, |\langle u \otimes e_i \otimes e_j \otimes e_k, c_4\rangle| \le d^3 \|c_3\| \|c_4\|. \tag{4.20}
\]
As explained in Section 3.4, since $v \in C^4$ we have $c_k = c_k(\bar W) = \mathbb{E}[\nabla^k \bar W(Z)]$, $k = 3, 4$. Hence
\[
\|c_3\| \le \mathbb{E}\,\|\nabla^3 \bar W(Z)\| \le \|\sigma\|^3 \, \mathbb{E}\,\|\nabla^3 W(m + \sigma Z)\| \lesssim N^{-1/2}. \tag{4.21}
\]
To get the last inequality, we used (3.16) to bound $\|m\|, \|\sigma\|$ by a constant, and we applied Lemma 3.4 with $Y = m + \sigma Z$, $p = 1$, $k = 3$. The lemma applies since $\mathbb{E}[\|m + \sigma Z\|^s] \lesssim \sqrt{d}^{\,s}$ for all $s \ge 0$; in particular, $\mathbb{E}[\|(m + \sigma Z)/\sqrt{d}\|^s] \lesssim 1$, so the bound in the lemma reduces to $N^{-1/2}$. Analogously,
\[
\|c_4\| \lesssim N^{-1}. \tag{4.22}
\]
Substituting the bounds (4.21), (4.22) into (4.20) and using the equality (4.19) gives the bound in the statement of the lemma.

We now compute bounds on expectations of the form $\mathbb{E}[|r_k(Z)|^p]$, $k = 3, 4$, and on $\mathbb{E}[r_3(\bar X)^6]$. Using the exact formula (3.29) for $r_k$, we obtain the following bound:

Corollary 4.1 (Corollary B.1 in Appendix B.2). Let $k = 3$ if $W \in C^3$ and $k = 4$ if $W \in C^4$.
Let $Y \in \mathbb{R}^d$ be a random variable such that $\mathbb{E}[\|Y\|^s] < \infty$ for all $0 \le s \le 2pk + 2pq$, where $q$ is from Assumption W2. Then
\[
\mathbb{E}[|r_k(Y)|^p] \lesssim \left(\frac{d^k}{N^{k-2}}\right)^{p/2}\left(\sqrt{\mathbb{E}\,\|Y/\sqrt{d}\|^{2kp}} + \sqrt{\mathbb{E}\,\|Y/\sqrt{d}\|^{2(k-1)p}} + 1\right)\left(1 + \sqrt{\mathbb{E}\big[\|Y/\sqrt{d}\|^{2pq}\big]}\right). \tag{4.23}
\]
Taking $Y = Z$, the expectations in (4.23) are all bounded by constants, so we immediately obtain
\[
\mathbb{E}[|r_k(Z)|^p] \lesssim \left(\frac{d^k}{N^{k-2}}\right)^{p/2}.
\]
The corollary also applies to $Y = \bar X$, $k = 3$, $p = 6$. This is because, as we show in Corollary 4.2 in the next section, $\mathbb{E}[\|X/\sqrt{d}\|^s] \lesssim \exp(2\sqrt{d^3/N}) < \infty$ for all $s \le 36 + 12q = 2pk + 2pq$. Since $\bar X = \sigma^{-1}(X - m)$, using (3.16) we conclude that also $\mathbb{E}[\|\bar X/\sqrt{d}\|^s] \lesssim \exp(2\sqrt{d^3/N}) < \infty$ for all $s \le 36 + 12q$. Hence (4.23) gives
\[
\mathbb{E}[r_3(\bar X)^6] \lesssim \exp\left(2\sqrt{d^3/N}\right)\left(\frac{d^3}{N}\right)^3.
\]

4.3  Bounds on $X$ Moments

In this section we bound expectations of the form $\mathbb{E}[(a^T X)^p]$, $\|a\| \lesssim 1$, and $\mathbb{E}[\|X\|^p]$, both of which take the form
\[
\mathbb{E}[f(X)] = \frac{\int_{\mathbb{R}^d} f(x)\, e^{-W(x)}\,dx}{\int_{\mathbb{R}^d} e^{-W(x)}\,dx}, \qquad 0 \le f(x) \lesssim \|x\|^p. \tag{4.24}
\]
To evaluate this integral, we break up the numerator into inner, middle, and outer regions
\[
\mathcal{I} = \left\{\|x\| \le 2\sqrt{2}\sqrt{d}\right\}, \qquad \mathcal{M} = \left\{2\sqrt{2}\sqrt{d} \le \|x\| \le \sqrt{N}\right\}, \qquad \mathcal{O} = \left\{\|x\| \ge \sqrt{N}\right\}.
\]
We then bound $\mathbb{E}[f(X)]$ as
\[
\mathbb{E}[f(X)] = \mathbb{E}\big[f(X)\mathbf{1}_{\mathcal{I}}(X)\big] + \mathbb{E}\big[f(X)\mathbf{1}_{\mathcal{M}}(X)\big] + \mathbb{E}\big[f(X)\mathbf{1}_{\mathcal{O}}(X)\big]
\lesssim \frac{1}{\int_{\mathcal{I}} e^{-W(x)}dx}\left(\int_{\mathcal{I}} f(x)e^{-W(x)}dx + \int_{\mathcal{M}} \|x\|^p e^{-W(x)}dx + \int_{\mathcal{O}} \|x\|^p e^{-W(x)}dx\right). \tag{4.25}
\]
The inner region $\mathcal{I}$ is chosen so that (1) for $x \in \mathcal{I}$, we can approximate $e^{-W(x)}$ by $e^{-\|x\|^2/2}$, and (2) the standard Gaussian density places $O(1)$ mass on $\mathcal{I}$. This will allow us to show that
\[
\frac{\int_{\mathcal{I}} f(x)e^{-W(x)}dx}{\int_{\mathcal{I}} e^{-W(x)}dx} \lesssim \mathbb{E}[f(Z)].
\]
The middle region $\mathcal{M}$ is chosen so that (1) $e^{-W(x)}$ is bounded by another, greater variance Gaussian density, namely $e^{-\|x\|^2/4}$, and (2) this density places exponentially little mass on $\mathcal{M}$.
The bound on $\int_{\mathcal{M}} \|x\|^p e^{-W(x)}dx / \int_{\mathcal{I}} e^{-W(x)}dx$ therefore involves a ratio of Gaussian normalization constants that grows exponentially in $d$, but this growth is neutralized by the exponentially decaying Gaussian tail probability. Finally, in $\mathcal{O}$ we use Assumption W3 to bound the integral $\int_{\mathcal{O}} \|x\|^p e^{-W(x)}dx$ by a number decaying exponentially in $n$ times the tail integral of a function $\|x\|^{-r}$. The following four short lemmas carry out this program. We let $\tau = d^3/N$ in the statements and proofs below.

Lemma 4.3. We have
\[
\int_{\|x\| \le 2\sqrt{2}\sqrt{d}} e^{-W(x)}dx \gtrsim e^{-2\sqrt{\tau}}\sqrt{2\pi}^{\,d}, \qquad \text{where } \tau = d^3/N.
\]
Proof. By Lemma 3.3 with $C = 2\sqrt{2}$, we have
\[
e^{-W(x)} \ge e^{-\|x\|^2/2}\, e^{-\frac{4\sqrt{2}}{3} d\sqrt{d}/\sqrt{N}} \ge e^{-\|x\|^2/2}\, e^{-2\sqrt{\tau}}, \qquad \|x\| \le 2\sqrt{2}\sqrt{d}.
\]
Therefore,
\[
\int_{\|x\| \le 2\sqrt{2}\sqrt{d}} e^{-W(x)}dx \ge e^{-2\sqrt{\tau}} \int_{\|x\| \le 2\sqrt{2}\sqrt{d}} e^{-\frac{1}{2}\|x\|^2}dx
= e^{-2\sqrt{\tau}}\sqrt{2\pi}^{\,d}\,\mathbb{P}\big(\|Z\| \le 2\sqrt{2}\sqrt{d}\big) \gtrsim e^{-2\sqrt{\tau}}\sqrt{2\pi}^{\,d}, \tag{4.26}
\]
as desired.

Lemma 4.4. Let $f \ge 0$. Then $\mathbb{E}[f(X)\mathbf{1}_{\mathcal{I}}(X)] \lesssim e^{2\sqrt{\tau}}\,\mathbb{E}[f(Z)]$. In particular, $\mathbb{E}[\|X\|^p \mathbf{1}_{\mathcal{I}}(X)] \lesssim e^{2\sqrt{\tau}}\, d^{p/2}$.

Proof. Using Lemma 4.3 and Lemma 3.3,
\[
\mathbb{E}[f(X)\mathbf{1}_{\mathcal{I}}(X)] \le \frac{\int_{\mathcal{I}} f(x)e^{-W(x)}dx}{\int_{\mathcal{I}} e^{-W(x)}dx} \lesssim \frac{e^{2\sqrt{\tau}}}{\sqrt{2\pi}^{\,d}}\int_{\mathcal{I}} f(x)e^{-W(x)}dx
\lesssim \frac{e^{2\sqrt{\tau}}}{\sqrt{2\pi}^{\,d}}\int_{\mathcal{I}} f(x)e^{-\frac{1}{2}\|x\|^2}dx \le e^{2\sqrt{\tau}}\,\mathbb{E}[f(Z)], \tag{4.27}
\]
as desired.

Lemma 4.5. We have $\mathbb{E}[\|X\|^p\mathbf{1}_{\mathcal{M}}(X)] \lesssim e^{2\sqrt{\tau}}$ for all $p \ge 0$.

Proof. Using Lemma 4.3 and Lemma 3.3,
\[
\mathbb{E}[\|X\|^p\mathbf{1}_{\mathcal{M}}(X)] \le \frac{\int_{\mathcal{M}} \|x\|^p e^{-W(x)}dx}{\int_{\mathcal{I}} e^{-W(x)}dx} \lesssim \frac{e^{2\sqrt{\tau}}}{\sqrt{2\pi}^{\,d}}\int_{\mathcal{M}} \|x\|^p e^{-\|x\|^2/4}dx
\le \frac{e^{2\sqrt{\tau}}}{\sqrt{2\pi}^{\,d}}\int_{\|x\| \ge 2\sqrt{2}\sqrt{d}} \|x\|^p e^{-\|x\|^2/4}dx. \tag{4.28}
\]
We now change variables as $x = \sqrt{2}y$, so that $\|x\|^p dx$ is bounded above by $\sqrt{2}^{\,d+p}\|y\|^p dy$. Hence
\[
\mathbb{E}[\|X\|^p\mathbf{1}_{\mathcal{M}}(X)] \lesssim \sqrt{2}^{\,d+p}\,\frac{e^{2\sqrt{\tau}}}{\sqrt{2\pi}^{\,d}}\int_{\|y\| \ge 2\sqrt{d}} \|y\|^p e^{-\frac{1}{2}\|y\|^2}dy
\lesssim e^{2\sqrt{\tau}}\sqrt{2}^{\,d+p}\,\mathbb{E}\left[\|Z\|^p\mathbf{1}\{\|Z\| \ge 2\sqrt{d}\}\right]
\lesssim e^{2\sqrt{\tau}}\,\sqrt{2}^{\,d+p} d^{p/2} e^{-d/4} \lesssim e^{2\sqrt{\tau}}. \tag{4.29}
\]

Lemma 4.6. For all $p \le 12q + 36$ we have $\mathbb{E}[\|X\|^p\mathbf{1}_{\mathcal{O}}(X)] \lesssim e^{2\sqrt{\tau}}$.

Proof.
Using Lemma 4.3 and Assumption W3, we get
\[
\mathbb{E}[\|X\|^p\mathbf{1}_{\mathcal{O}}(X)] \le \frac{\int_{\|x\| \ge \sqrt{N}} \|x\|^p e^{-W(x)}dx}{\int_{\mathcal{I}} e^{-W(x)}dx}
\lesssim e^{2\sqrt{\tau}}\int_{\|x\| \ge \sqrt{N}} \|x\|^{p - d - 12q - 36}\,dx
\lesssim e^{2\sqrt{\tau}}\int_{\sqrt{N}}^\infty r^{p - 12q - 36 - 1}\,dr
\lesssim e^{2\sqrt{\tau}}\,\sqrt{N}^{\,p - 12q - 36} \lesssim e^{2\sqrt{\tau}}. \tag{4.30}
\]
In the third line, we left out the surface area of the $(d-1)$-sphere, which is an at most $O(1)$ factor.

The above three lemmas immediately imply

Corollary 4.2. For all $p \le 12q + 36$ we have $\mathbb{E}[\|X\|^p] \lesssim d^{p/2}\, e^{2\sqrt{\tau}}$.

We also have

Corollary 4.3. Let $\bar X = \sigma^{-1}(X - m)$, where $\|\sigma^{-1}\|, \|m\| \lesssim 1$. If $\|u\| = 1$ then $\mathbb{E}[(u^T\bar X)^2] \lesssim e^{2\sqrt{\tau}}$ and $\mathbb{E}[((u^T\bar X)^2 - 1)^2] \lesssim e^{2\sqrt{\tau}}$.

Proof. We have $\mathbb{E}[((u^T\bar X)^2 - 1)^2] \lesssim \mathbb{E}[(u^T\bar X)^4] + 1$, so it suffices to show $\mathbb{E}[(u^T\bar X)^k] \lesssim e^{2\sqrt{\tau}}$ for $k = 2, 4$. Since $\bar X = \sigma^{-1}(X - m)$, we have
\[
\mathbb{E}[(u^T\bar X)^k] \lesssim \mathbb{E}[(u^T\sigma^{-1}X)^k] + \|\sigma^{-1}m\|^k.
\]
By the assumptions on $\sigma$ and $m$, the term $\|\sigma^{-1}m\|^k$ is bounded by a constant, so it remains to show $\mathbb{E}[(a^T X)^k] \lesssim e^{2\sqrt{\tau}}$, where $a = \sigma^{-1}u$. Using Lemmas 4.4, 4.5, and 4.6 and noting that $\|a\| \lesssim 1$, we get
\[
\mathbb{E}[(a^T X)^k] \lesssim \mathbb{E}[(a^T X)^k\mathbf{1}_{\mathcal{I}}(X)] + \mathbb{E}[\|X\|^k\mathbf{1}_{\mathcal{M}}(X)] + \mathbb{E}[\|X\|^k\mathbf{1}_{\mathcal{O}}(X)]
\lesssim e^{2\sqrt{\tau}}\big(\mathbb{E}[(a^T Z)^k] + 1\big) \lesssim e^{2\sqrt{\tau}}, \tag{4.31}
\]
as desired.

5  Proof of Lemma 1-W

In this section, we use $m \in \mathbb{R}^d$, $\sigma \in \mathbb{R}^{d\times d}$ to denote generic arguments. Consider the equations $(E_W)$, which we rewrite in the following form:
\[
\mathbb{E}[\nabla W(m + \sigma Z)] = 0, \tag{5.1}
\]
\[
\mathbb{E}[\nabla^2 W(m + \sigma Z)] = (\sigma\sigma^T)^{-1}. \tag{5.2}
\]
Note that these equations are well-defined for all $(\sigma, m) \in \mathbb{R}^{d\times d}\times\mathbb{R}^d$, although we can only expect uniqueness of solutions in a subset of $S^d_{++} \times \mathbb{R}^d$; indeed, (5.1) and (5.2) only depend on $\sigma$ through $S = \sigma\sigma^T$, which has multiple solutions $\sigma$. We now restate Lemma 1-W using the following notation:
\[
B_r(0,0) = \{(\sigma, m) \in \mathbb{R}^{d\times d}\times\mathbb{R}^d : \|\sigma\|^2 + \|m\|^2 \le r^2\}, \qquad B_r = \{\sigma \in \mathbb{R}^{d\times d} : \|\sigma\| \le r\}, \qquad S_{c_1,c_2} = \{\sigma \in S^d_+ : c_1 I_d \preceq \sigma \preceq c_2 I_d\}. \tag{5.3}
\]
In particular, note that $S_{0,r} \subset B_r$.

Lemma 5.1. Let $W$ satisfy Assumptions W0, W1, W2, and W3, and assume $\sqrt{N}/d \ge 40\sqrt{2}\big(\sqrt{3}+\sqrt{(2q)!}\big)$. Let $r = 2\sqrt{2}$, $c_1 = \sqrt{2/3}$, and $c_2 = \sqrt{2}$.
There exists a unique pair $(\sigma, m) \in B_r(0,0) \cap (S_{0,r/2} \times \mathbb{R}^d)$ satisfying (5.1) and (5.2), and this pair is such that $\sigma \in S_{c_1,c_2}$.

Let us sketch the proof of the lemma. Let $f : \mathbb{R}^{d\times d} \times \mathbb{R}^d \to \mathbb{R}^d$ be given by $f(\sigma, m) = \mathbb{E}[\nabla W(m + \sigma Z)]$. Note that $f(0, 0) = 0$, so by the Implicit Function Theorem, there exists a map $m(\sigma)$ defined in a neighborhood of $\sigma = 0$ such that $f(\sigma, m(\sigma)) = 0$. In Lemma 5.3, we make this statement quantitative, showing that for $r = 2\sqrt{2}$ we have the following result: for every $\sigma \in B_{r/2}$ there is a unique $m = m(\sigma)$ such that $(\sigma, m) \in B_r(0,0)$ and $f(\sigma, m) = 0$. Since $S_{0,r/2} \subset B_{r/2}$, we have in particular that any solution $(\sigma, m)$ to (5.1) in the region $B_r(0,0) \cap (S_{0,r/2} \times \mathbb{R}^d)$ is of the form $(\sigma, m(\sigma))$. Thus it remains to prove there exists a unique solution $\sigma \in S_{0,r/2}$ to the equation $\mathbb{E}[\nabla^2 W(m(\sigma) + \sigma Z)] = (\sigma\sigma^T)^{-1}$. To do so, we rewrite this equation as $F(\sigma) = \sigma$, where
\[
F(\sigma) = \mathbb{E}[\nabla^2 W(m(\sigma) + \sigma Z)]^{-1/2}.
\]
We show in Lemma 5.4 that $F$ is well-defined on $S_{0,r/2}$, a contraction, and satisfies $F(S_{0,r/2}) \subset S_{c_1,c_2} \subset S_{0,r/2}$. Thus by the Contraction Mapping Theorem, there is a unique $\sigma \in S_{0,r/2}$ satisfying $F(\sigma) = \sigma$. But since $F$ maps $S_{0,r/2}$ to $S_{c_1,c_2}$, the fixed point $\sigma$ necessarily lies in $S_{c_1,c_2}$. This finishes the proof.

Using a quantitative statement of the Inverse Function Theorem given in [Lan93], the following lemma determines the size of the neighborhood in which the map $m(\sigma)$ is defined.

Lemma 5.2. Let $f = (f_1, \dots, f_d) : \mathbb{R}^{d\times d}\times\mathbb{R}^d \to \mathbb{R}^d$ be $C^3$, where $\mathbb{R}^{d\times d}$ is the set of $d\times d$ matrices, endowed with the standard matrix operator norm. Suppose $f(0,0) = 0$, $\nabla_\sigma f(0,0) = 0$, $\nabla_m f(\sigma, m)$ is symmetric for all $m, \sigma$, and $\nabla_m f(0,0) = I_d$. Let $r > 0$ be such that
\[
\sup_{(\sigma,m)\in B_r(0,0)} \|\nabla f(\sigma, m) - \nabla f(0,0)\|_{\mathrm{op}} \le \frac{1}{4}. \tag{5.4}
\]
Then for each $\sigma \in \mathbb{R}^{d\times d}$ such that $\|\sigma\| \le r/2$ there exists a unique $m = m(\sigma) \in \mathbb{R}^d$ such that $f(\sigma, m(\sigma)) = 0$ and $(\sigma, m(\sigma)) \in B_r(0,0)$. Furthermore, the map $\sigma \mapsto m(\sigma)$ is $C^2$, with
\[
\frac{1}{2} I_d \preceq \nabla_m f(\sigma, m)\big|_{m = m(\sigma)} \preceq \frac{3}{2} I_d, \qquad \|\nabla_\sigma m(\sigma)\|_{\mathrm{op}} \le 1. \tag{5.5}
\]
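The fixed-point construction sketched above can be illustrated in one dimension. The sketch below uses a hypothetical quartic $W(x) = x^2/2 + \varepsilon x^4$ (which satisfies $W(0) = 0$ and $W''(0) = 1$; by symmetry, $m(\sigma) = 0$), and iterates $F(\sigma) = \mathbb{E}[W''(\sigma Z)]^{-1/2}$ to its fixed point with the Gaussian expectation computed by quadrature. This is a toy instance of the argument, not the paper's general construction.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

eps = 0.05
W2 = lambda x: 1.0 + 12.0 * eps * x**2   # W''(x) for W(x) = x^2/2 + eps*x^4

nodes, weights = hermegauss(20)
weights = weights / np.sqrt(2 * np.pi)   # expectation under N(0, 1)

sigma = 1.0                              # start at the Laplace value W''(0)^{-1/2} = 1
for _ in range(100):
    # one step of F(sigma) = E[W''(sigma Z)]^{-1/2}; a contraction here
    sigma = np.sum(weights * W2(sigma * nodes)) ** -0.5
```

For this $W$, the fixed-point equation reduces to $\sigma^2 = 1/(1 + 12\varepsilon\sigma^2)$, whose positive root at $\varepsilon = 0.05$ is $\sigma^2 = (\sqrt{3.4} - 1)/1.2 \approx 0.703$, and the iteration converges to it geometrically.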
See Appendix E for careful definitions of the norms appearing above, as well as the proof of the lemma.

Lemma 5.3. Let $f : \mathbb{R}^{d\times d}\times\mathbb{R}^d \to \mathbb{R}^d$ be given by $f(\sigma, m) = \mathbb{E}[\nabla W(\sigma Z + m)]$. Then all the conditions of Lemma 5.2 are satisfied; in particular, (5.4) is satisfied with $r = 2\sqrt{2}$. Thus the conclusions of Lemma 5.2 hold with this choice of $r$.

Lemma 5.4. Let $r = 2\sqrt{2}$ and let $\sigma \in S_{0,r/2} \mapsto m(\sigma) \in \mathbb{R}^d$ be the restriction of the map furnished by Lemmas 5.2 and 5.3 to symmetric nonnegative matrices. Then the function $F$ given by
\[
F(\sigma) = \mathbb{E}[\nabla^2 W(m(\sigma) + \sigma Z)]^{-1/2}
\]
is well-defined and a strict contraction on $S_{0,r/2}$. Moreover,
\[
F(S_{0,r/2}) \subseteq S_{c_1,c_2} \subseteq S_{0,r/2},
\]
where $c_1 = \sqrt{2/3}$ and $c_2 = \sqrt{2} = r/2$.

This lemma concludes the proof of Lemma 5.1, since by the Contraction Mapping Theorem there is a unique fixed point $\sigma \in S_{0,r/2}$ of $F$, and $F(\sigma) = \sigma$ is simply a reformulation of the second optimality equation (5.2). We know $\sigma$ must lie in $S_{c_1,c_2}$ since $F$ maps $S_{0,r/2}$ to this set. See Appendix E for the proofs of the above lemmas.

Acknowledgments

A. Katsevich is supported by NSF grant DMS-2202963. P. Rigollet is supported by NSF grants IIS-1838071, DMS-2022448, and CCF-2106377.

A  Hermite Series Remainder

A.1  Brief Primer

Here is a brief primer on Hermite polynomials and Hermite series expansions. We let $H_k : \mathbb{R} \to \mathbb{R}$, $k = 0, 1, 2, \dots$, be the $k$th order probabilist's Hermite polynomial. We have $H_0(x) = 1$, $H_1(x) = x$, $H_2(x) = x^2 - 1$, $H_3(x) = x^3 - 3x$. For all $k \ge 1$, we can generate $H_{k+1}$ from the recurrence relation
\[
H_{k+1}(x) = xH_k(x) - kH_{k-1}(x), \qquad k \ge 1. \tag{A.1}
\]
In particular, $H_k(x)$ is an order $k$ polynomial given by a sum of monomials of the same parity as $k$. The $H_k$ are orthogonal with respect to the Gaussian measure; namely, we have $\mathbb{E}[H_k(Z)H_j(Z)] = k!\,\delta_{jk}$. We also note for future reference that
\[
\mathbb{E}[ZH_k(Z)H_{k+1}(Z)] = \mathbb{E}\big[(H_{k+1}(Z) + kH_{k-1}(Z))\, H_{k+1}(Z)\big] = (k+1)!, \tag{A.2}
\]
using the recurrence relation (A.1).
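The recurrence (A.1) and the orthogonality relation $\mathbb{E}[H_k(Z)H_j(Z)] = k!\,\delta_{jk}$ are easy to check numerically. The sketch below generates $H_0, \dots, H_5$ via the recurrence and evaluates their Gram matrix under the standard Gaussian with a probabilist's Gauss–Hermite rule; the result is the diagonal matrix $\mathrm{diag}(0!, 1!, \dots, 5!)$.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss

def hermite_probabilist(kmax, x):
    """Return [H_0(x), ..., H_kmax(x)] via the recurrence H_{k+1} = x H_k - k H_{k-1}."""
    H = [np.ones_like(x), x]
    for k in range(1, kmax):
        H.append(x * H[k] - k * H[k - 1])
    return H

nodes, weights = hermegauss(12)
weights = weights / np.sqrt(2 * np.pi)   # quadrature rule for E[.] under N(0, 1)
H = hermite_probabilist(5, nodes)

# Gram matrix: entry (j, k) approximates E[H_j(Z) H_k(Z)]
gram = np.array([[np.sum(weights * H[j] * H[k]) for k in range(6)] for j in range(6)])
```

A 12-node rule is exact for polynomial integrands of degree up to 23, which covers every product $H_j H_k$ with $j, k \le 5$.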
The multivariate Hermite polynomials are products of univariate Hermite polynomials, and are indexed by γ ∈ {0, 1, 2, . . . }d. Let γ = (γ1, . . . , γd), with γj ∈ {0, 1, 2, . . . }. Then

Hγ(x1, . . . , xd) = Π_{j=1}^d Hγj(xj),

which has order |γ| := Σ_{j=1}^d γj. Note that if |γ| = k then Hγ(x) is given by a sum of monomials of the same parity as k. Indeed, each Hγj(xj) is a linear combination of xj^{γj−2p}, p ≤ ⌊γj/2⌋. Thus Hγ(x) is a linear combination of monomials of the form Π_{j=1}^d xj^{γj−2pj}, which has total order k − 2Σj pj. Using the independence of the entries of Z = (Z1, . . . , Zd), we have

E[Hγ(Z)Hγ′(Z)] = γ! Π_{j=1}^d δ_{γj,γ′j},

where γ! := Π_{j=1}^d γj!. The Hγ can also be defined explicitly as follows:

e^{−∥x∥²/2} Hγ(x) = (−1)^{|γ|} ∂γ (e^{−∥x∥²/2}),    (A.3)

where ∂γ f(x) = ∂^{γ1}_{x1} · · · ∂^{γd}_{xd} f(x). This leads to the useful Gaussian integration by parts identity

E[f(Z)Hγ(Z)] = E[∂γ f(Z)],    if f ∈ C^{|γ|}(Rd).

The Hermite polynomials Hγ, γ ∈ {0, 1, . . . }d, form a complete orthogonal basis of the Hilbert space of functions f : Rd → R with inner product ⟨f, g⟩ = E[f(Z)g(Z)]. In particular, if f : Rd → R satisfies E[f(Z)²] < ∞, then f has the following Hermite expansion:

f(x) = Σ_{γ∈{0,1,...}d} (1/γ!) cγ(f) Hγ(x),    cγ(f) := E[f(Z)Hγ(Z)].    (A.4)

Let

rk(x) = f(x) − Σ_{|γ|≤k−1} (1/γ!) cγ(f) Hγ(x)    (A.5)

be the remainder of the Hermite series expansion of f after taking out the polynomials of order ≤ k − 1. We can write rk as an integral of f against a kernel. Namely, define

K(x, y) = Σ_{|γ|≤k−1} (1/γ!) Hγ(x) Hγ(y).    (A.6)

Note that

E[f(Z)K(x, Z)] = Σ_{|γ|≤k−1} (1/γ!) cγ(f) Hγ(x)

is the truncated Hermite series expansion of f. Therefore, the remainder rk can be written as

rk(x) = f(x) − E[f(Z)K(x, Z)] = E[(f(x) − f(Z))K(x, Z)],

using that E[K(x, Z)] = 1.

B    Exact Expression for the Remainder

Lemma B.1. Let k ≥ 1 and rk, K be as in (A.5), (A.6), respectively. Assume that f ∈ C1, and that ∥∇f(x)∥ ≲ e^{c∥x∥²} for some 0 ≤ c < 1/2.
Then +rk(x) = E [(f(x) − f(Z))K(x, Z)] += +� 1 +0 +d +� +i=1 +� +|γ|=k−1 +1 +γ!E [∂if((1 − t)Z + tx) (Hγ+ei(x)Hγ(Z) − Hγ+ei(Z)Hγ(x))]dt. +(B.1) +22 + +The proof relies on the following identity: +Lemma B.2. For each i = 1, . . . , d, it holds that +K(x, y) = +1 +xi − yi +� +|γ|=k−1 +1 +γ! (Hγ+ei(x)Hγ(y) − Hγ+ei(y)Hγ(x)) . +(B.2) +The proof of this identity is given at the end of the section. +Proof of Lemma B.1. Write +f(x) − f(Z) = +� 1 +0 +(x − Z)T ∇f((1 − t)Z + tx)dt += +d +� +i=1 +� 1 +0 +(xi − Zi)∂if((1 − t)Z + tx)dt, +(B.3) +so that, using (B.2), we have +E [(f(x) − f(Z))K(x, Z)] += +d +� +i=1 +� +|γ|=k−1 +1 +γ!E +�� 1 +0 +∂if((1 − t)Z + tx) (Hγ+ei(x)Hγ(Z) − Hγ+ei(Z)Hγ(x)) dt +� +. +(B.4) +By assumption, +sup +t∈[0,1] +|∂if((1 − t)Z + tx)| (|Hγ(Z)| + |Hγ+ei(Z)|) +≲ exp +� +c∥Z∥2 + 2c∥Z∥∥x∥ +� +(|Hγ(Z)| + |Hγ+ei(Z)|) +(B.5) +for some 0 ≤ c < 1/2. The right-hand side is integrable with respect to the Gaussian measure, and therefore +we can interchange the expectation and the integral in (B.4). Therefore, +E [(f(x) − f(Z))K(x, Z)] += +� 1 +0 +d +� +i=1 +� +|γ|=k−1 +1 +γ!E [∂if((1 − t)Z + tx) (Hγ+ei(x)Hγ(Z) − Hγ+ei(Z)Hγ(x))]dt. +(B.6) +Proof of Lemma B.2. Without loss of generality, assume i = 1. To simplify the proof, we will also assume +d = 2. The reader can check that the proof goes through in the same way for general d. By the recursion +relation (A.1) for 1d Hermite polynomials, we get that +Hγ1+1,γ2(x) = x1Hγ1,γ2(x) − γ1Hγ1−1,γ2(x), +where x = (x1, x2). Multiply this equation by Hγ(y) (where y = (y1, y2)) and swap x and y, to get the two +equations +Hγ1+1,γ2(x)Hγ1,γ2(y) = x1Hγ1,γ2(x)Hγ1,γ2(y) − γ1Hγ1−1,γ2(x)Hγ1,γ2(y), +Hγ1+1,γ2(y)Hγ1,γ2(x) = y1Hγ1,γ2(x)Hγ1,γ2(y) − γ1Hγ1−1,γ2(y)Hγ1,γ2(x). +(B.7) +Let +Sγ1,γ2 = Hγ1+1,γ2(x)Hγ1,γ2(y) − Hγ1+1,γ2(y)Hγ1,γ2(x). +Subtracting the second equation of (B.7) from the first one, and using the Sγ1,γ2 notation, gives +Sγ1,γ2 = (x1 − y1)Hγ1,γ2(x)Hγ1,γ2(y) + γ1Sγ1−1,γ2 +(B.8) +23 + +and hence +Sγ1,γ2 +γ1!γ2! 
= (x1 − y1) Hγ1,γ2(x)Hγ1,γ2(y)/(γ1! γ2!) + Sγ1−1,γ2/((γ1 − 1)! γ2!).

Iterating this recursive relationship γ1 − 1 times, we get

Sγ1,γ2/(γ1! γ2!) = (x1 − y1) Σ_{j=0}^{γ1−1} Hγ1−j,γ2(x)Hγ1−j,γ2(y)/((γ1 − j)! γ2!) + S0,γ2/(0! γ2!).    (B.9)

Now, we have

S0,γ2 = H1,γ2(x)H0,γ2(y) − H1,γ2(y)H0,γ2(x)
= H1(x1)Hγ2(x2)Hγ2(y2) − H1(y1)Hγ2(y2)Hγ2(x2)
= (x1 − y1)Hγ2(x2)Hγ2(y2)
= (x1 − y1)H0,γ2(x)H0,γ2(y).    (B.10)

Therefore,

S0,γ2/(0! γ2!) = (x1 − y1) Hγ1−j,γ2(x)Hγ1−j,γ2(y)/((γ1 − j)! γ2!),    j = γ1,

so (B.9) can be written as

Sγ1,γ2/(γ1! γ2!) = (x1 − y1) Σ_{j=0}^{γ1} Hγ1−j,γ2(x)Hγ1−j,γ2(y)/((γ1 − j)! γ2!)

and hence

(1/(x1 − y1)) Σ_{γ1+γ2=k−1} Sγ1,γ2/(γ1! γ2!) = Σ_{γ1+γ2=k−1} Σ_{j=0}^{γ1} Hγ1−j,γ2(x)Hγ1−j,γ2(y)/((γ1 − j)! γ2!)
= Σ_{γ1+γ2≤k−1} Hγ1,γ2(x)Hγ1,γ2(y)/(γ1! γ2!) = K(x, y),    (B.11)

using the observation that

{(γ1 − j, γ2) : γ1 + γ2 = k − 1, 0 ≤ j ≤ γ1} = {(γ̃1, γ2) : γ̃1 + γ2 ≤ k − 1}.    (B.12)

Substituting back in the definition of Sγ1,γ2 gives the desired result.

B.1    Hermite Series Remainder in Tensor Form

Using (B.1), it is difficult to obtain an upper bound on |rk(x)|, since we need to sum over all γ of order k − 1. In this section, we obtain a more compact representation of rk in terms of a scalar product of k-tensors. We then take advantage of a very useful representation of the tensor of order-k Hermite polynomials as an expectation of a vector outer product. This allows us to bound the scalar product in the rk formula in terms of an operator norm rather than a Frobenius norm (the latter would incur a larger d dependence).

First, let us put all the kth-order Hermite polynomials into a tensor with dk entries, some of which repeat, enumerated by multi-indices α = (α1, . . . , αk) ∈ [d]^k. Here, [d] = {1, . . . , d}. We do so as follows: given α ∈ [d]^k, define γ(α) = (γ1(α), . . . , γd(α)) by

γj(α) = Σ_{ℓ=1}^k 1{αℓ = j},

i.e. γj(α) counts how many times the index j appears in α.
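As an aside, in one dimension the identity (B.2) just proved is the classical Christoffel–Darboux formula for Hermite polynomials, since the only γ with |γ| = k − 1 is γ = k − 1. A quick numerical check of this special case (function names are ours):

```python
import math
import numpy as np

def he(k, x):
    # probabilist's Hermite polynomial He_k via the recurrence (A.1)
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

def kernel(x, y, k):
    """Truncation kernel K(x,y) = sum_{j <= k-1} He_j(x) He_j(y) / j!  (d = 1)."""
    return sum(he(j, x) * he(j, y) / math.factorial(j) for j in range(k))

def kernel_cd(x, y, k):
    """Christoffel-Darboux form of (B.2) in one dimension."""
    num = he(k, x) * he(k - 1, y) - he(k, y) * he(k - 1, x)
    return num / (math.factorial(k - 1) * (x - y))

rng = np.random.default_rng(0)
for k in range(1, 7):
    for _ in range(20):
        x, y = rng.normal(size=2)
        lhs, rhs = kernel(x, y, k), kernel_cd(x, y, k)
        assert abs(lhs - rhs) < 1e-7 * (1 + abs(lhs))
```

For instance, k = 2 gives K(x, y) = 1 + xy on both sides, matching the truncation of the series at the linear term.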
For this reason, we use the term counting index to +denote indices of the form γ = (γ1, . . . , γd) ∈ {0, 1, 2, . . . }d, whereas we use the standard term “multi-index” +24 + +to refer to the α’s. Note that we automatically have |γ(α)| = k if α ∈ [d]k. Now, for x ∈ Rd, define H0(x) = 1 +and Hk(x), k ≥ 1 as the tensor +Hk(x) = {Hγ(α)(x)}α∈[d]k, +x ∈ Rd. +When enumerating the entries of Hk, we write H(α) +k +to denote Hγ(α). Note that for each γ with |γ| = k, +there are +�k +γ +� +α’s such that γ(α) = γ. +Example B.1. Consider the α = (i, j, j, k, k, k) entry of the tensor H6(x), where i, j, k ∈ [d] are all distinct. +We count that i occurs once, j occurs twice, and k occurs thrice. Thus +H(i,j,j,k,k,k) +6 +(x) = H1(xi)H2(xj)H3(xk) = xi(x2 +j − 1)(x3 +k − 3xk). +The first two tensors H1, H2 can be written down explicitly. For the entries of H1, we simply have H(i) +1 (x) = +H1(xi) = xi, i.e. H1(x) = x. For the entries of H2, we have H(i,i) +2 +(x) = H2(xi) = x2 +i − 1 and H(i,j) +2 +(x) = +H1(xi)H1(xj) = xixj, i ̸= j. Thus H2(x) = xxT − Id. +We now group the terms in the Hermite series expansion (A.4) based on the order |γ|. Consider all γ in +the sum such that |γ| = k. We claim that +� +|γ|=k +1 +γ!cγ(f)Hγ(x) = 1 +k! +� +α∈[d]k +cγ(α)(f)H(α) +k +(x). +(B.13) +Indeed, for a fixed γ such that |γ| = k, there are +�k +γ +� +α’s in [d]k for which γ(α) = γ, and the summands in +the right-hand sum corresponding to these α’s are all identical, equalling cγ(f)Hγ(x). Thus we obtain +�k +γ +� +copies of cγ(f)Hγ(x), and it remains to note that +�k +γ +� +/k! = 1/γ!. +Analogously to Hk(x), define the tensor ck ∈ (Rd)⊗k, whose α’th entry is +c(α) +k += cγ(α) = E [f(Z)Hγ(α)(Z)] = E [f(Z)H(α) +k +(Z)]. +We then see that the sum (B.13) can be written as +1 +k!⟨ck, Hk(x)⟩, and hence the series expansion of f can +be written as +f(x) = +∞ +� +k=0 +1 +k!⟨ck(f), Hk(x)⟩, +ck(f) := E [f(Z)Hk(Z)]. 
(B.14)

The main result of this section is Lemma B.4 below, in which we express rk in terms of a tensor scalar product. However, let us first prove the following lemma, which is needed to bound the term ⟨u ⊗ c3 ⊗ c4, E[Z ⊗ H3 ⊗ H4]⟩ in the preliminary bound (4.3).

Lemma B.3. Let cp, cp+1 be symmetric tensors in (Rd)^{⊗p} and (Rd)^{⊗(p+1)}, respectively. Then

⟨u ⊗ cp ⊗ cp+1, E[Z ⊗ Hp(Z) ⊗ Hp+1(Z)]⟩ = (p + 1)! ⟨u ⊗ cp, cp+1⟩.    (B.15)

Proof. Let T = E[Z ⊗ Hp ⊗ Hp+1]. First, we characterize the nonzero entries of T using the counting index notation. In counting index notation, a typical entry of T takes the form E[Zi Hγ(Z) Hγ′(Z)], where i ∈ [d], |γ| = p, and |γ′| = p + 1. Now,

E[Zi Hγ(Z) Hγ′(Z)] = E[Zi Hγi(Zi) Hγ′i(Zi)] Π_{j≠i} E[Hγj(Zj) Hγ′j(Zj)]
= E[Zi Hγi(Zi) Hγ′i(Zi)] Π_{j≠i} δ_{γj,γ′j} γj!.    (B.16)

For this to be nonzero, we must have γj = γ′j for all j ≠ i. But since |γ| = p and |γ′| = p + 1, it follows that we must have γ′i = γi + 1. Hence γ′ = γ + ei, where ei is the ith unit vector. To summarize, T_{i,γ,γ′} is only nonzero if γ′ = γ + ei. In this case, we have

E[Zi Hγ(Z) Hγ+ei(Z)] = E[Zi Hγi(Zi) Hγi+1(Zi)] Π_{j≠i} γj! = (γi + 1)! Π_{j≠i} γj! = (γ + ei)!.    (B.17)

To get the second equality we used the following recurrence relation for 1-d Hermite polynomials: xHk(x) = Hk+1(x) + kHk−1(x) for all k ≥ 1. Now, we take the inner product (B.15) using counting index notation, recalling that each γ such that |γ| = p shows up in the tensor Hp exactly p!/γ! times:

⟨u ⊗ cp ⊗ cp+1, E[Z ⊗ Hp(Z) ⊗ Hp+1(Z)]⟩
= Σ_{i=1}^d Σ_{|γ|=p} Σ_{|γ′|=p+1} (p!/γ!) ((p + 1)!/γ′!) ui cγ cγ′ E[Zi Hγ(Z) Hγ′(Z)]
= Σ_{i=1}^d Σ_{|γ|=p} (p!/γ!) ((p + 1)!/(γ + ei)!) ui cγ cγ+ei (γ + ei)!
= (p + 1)! Σ_{i=1}^d Σ_{|γ|=p} (p!/γ!) ui cγ cγ+ei
= (p + 1)! Σ_{i=1}^d Σ_{j1,...,jp=1}^d ui c_p^{(j1,...,jp)} c_{p+1}^{(i,j1,...,jp)}
= (p + 1)! ⟨u ⊗ cp, cp+1⟩.    (B.18)

Lemma B.4.
Let f satisfy the assumptions of Lemma B.1, and additionally, assume f ∈ Ck. Then the +remainder rk, given as (B.1) in Lemma B.1, can also be written in the form +rk(x) = +� 1 +0 +(1 − t)k−1 +(k − 1)! E +�� +∇kf ((1 − t)Z + tx) , Hk(x) − Z ⊗ Hk−1(x) +�� +(B.19) +Proof. Recall that ∂γ := ∂γ1 +1 . . . ∂γd +d , and that +Hγ(z)e−∥z∥2/2 = (−1)|γ|∂γ(e−∥z∥2/2). +We then have for |γ| = k − 1, +E [∂if((1 − t)Z + tx)Hγ(Z)] = (1 − t)k−1E [∂γ+eif], +E [∂if((1 − t)Z + tx)Hγ+ei(Z)] = (1 − t)k−1E [∂γ+eifZi], +(B.20) +using the fact that f ∈ Ck. We omitted the argument (1 − t)Z + tx from the right-hand side for brevity. +To get the second equation, we moved only γ of the γ + ei derivatives from e−∥z∥2/2 onto ∂if, leaving +−∂i(e−∥z∥2/2) = zi. Substituting these two equations into (B.1), we get +E [(f(x) − f(Z))K(x, Z)] += +� 1 +0 +(1 − t)k−1 +d +� +i=1 +� +|γ|=k−1 +1 +γ!E [(∂γ+eif) (Hγ+ei(x) − ZiHγ(x))]dt += +1 +(k − 1)! +� 1 +0 +(1 − t)k−1 +d +� +i=1 +� +|γ|=k−1 +�k − 1 +γ +� +E [(∂γ+eif) (Hγ+ei(x) − ZiHγ(x))]dt +(B.21) +26 + +Now, define the sets +A = {(i, γ + ei) : i = 1, . . . , d, γ ∈ {0, 1, . . . }d, |γ| = k − 1}, +B = {(i, ˜γ) : ˜γ ∈ {0, 1, . . . }d, |˜γ| = k, ˜γi ≥ 1}. +(B.22) +It is straightforward to see that A = B. Therefore, +d +� +i=1 +� +|γ|=k−1 +�k − 1 +γ +� +E [∂γ+eif]Hγ+ei(x) += +� +|˜γ|=k +� +i: ˜γi≥1 +� k − 1 +˜γ − ei +� +E [∂˜γf]H˜γ(x) += +� +|˜γ|=k +� +i: ˜γi≥1 +�k +˜γ +� ˜γi +k E [∂˜γf]H˜γ(x) += +� +|˜γ|=k +�k +˜γ +� +E [∂˜γf]H˜γ(x) = ⟨E [∇kf], Hk(x)⟩ +(B.23) +Next, note that +� +|γ|=k−1 +�k − 1 +γ +� +∂γ∂ifHγ(x) = ⟨∇k−1∂if, Hk−1(x)⟩, +and therefore +d +� +i=1 +� +|γ|=k−1 +�k − 1 +γ +� +E [(∂γ+eif)Zi]Hγ(x) = E [⟨∇kf, Z ⊗ Hk−1(x)⟩]. +(B.24) +Substituting (B.23) and (B.24) into (B.21) gives +rk(x) = E [(f(x) − f(Z))K(x, Z)] += +� 1 +0 +(1 − t)k−1 +(k − 1)! E +�� +∇kf ((1 − t)Z + tx) , Hk(x) − Z ⊗ Hk−1(x) +�� +(B.25) +In the next section, we obtain a pointwise upper bound on |rk(x)| in the case f = ¯W. 
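A numerical sanity check of the remainder formula (B.19) in one dimension: for f(x) = x⁴ and k = 3, the remainder r3 computed directly from (A.5) agrees with the right-hand side of (B.19); in this case r3 = H4 exactly, since c3(f) = 0. The helper names below are ours, and Gauss quadrature replaces the exact expectations:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from numpy.polynomial.legendre import leggauss

def he(k, x):
    # probabilist's Hermite polynomial He_k via the recurrence (A.1)
    x = np.asarray(x, dtype=float)
    h0, h1 = np.ones_like(x), x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

f = lambda x: x ** 4
f3 = lambda x: 24.0 * x          # third derivative of f

z, wz = hermegauss(40)
wz = wz / math.sqrt(2 * math.pi)  # expectations over Z ~ N(0,1)

def remainder_direct(x, k=3):
    # r_k(x) = f(x) - sum_{j < k} (c_j / j!) He_j(x),  c_j = E[f(Z) He_j(Z)]
    out = f(x)
    for j in range(k):
        cj = np.dot(wz, f(z) * he(j, z))
        out = out - cj / math.factorial(j) * he(j, x)
    return float(out)

def remainder_formula(x, k=3):
    # (B.19) in d = 1:
    # int_0^1 (1-t)^{k-1}/(k-1)! * E[f^{(k)}((1-t)Z + tx) (He_k(x) - Z He_{k-1}(x))] dt
    t, wt = leggauss(10)
    t, wt = 0.5 * (t + 1.0), 0.5 * wt   # map nodes from [-1,1] to [0,1]
    total = 0.0
    for ti, wi in zip(t, wt):
        ez = np.dot(wz, f3((1 - ti) * z + ti * x) * (he(k, x) - z * he(k - 1, x)))
        total += wi * (1 - ti) ** (k - 1) / math.factorial(k - 1) * ez
    return float(total)

for x in (-1.3, 0.2, 2.0):
    assert abs(remainder_direct(x) - remainder_formula(x)) < 1e-8
    # for f(x) = x^4 and k = 3 the remainder is exactly He_4(x)
    assert abs(remainder_direct(x) - he(4, x)) < 1e-8
```

Both quadrature rules are exact here (the integrands are low-degree polynomials), so the agreement holds to rounding error.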
In order for this +bound to be tight in its dependence on d, we need a supplementary result on inner products with Hermite +tensors. To motivate this supplementary result, consider bounding the inner product in (B.19) by the product +of the Frobenius norms of the tensors on either side. As a rough heuristic, ∥∇kf∥F ∼ dk/2∥∇kf∥, where +recall that ∥∇kf∥ is the operator norm of ∇kf. Therefore, we would prefer to bound the inner product in +terms of ∥∇kf∥ to get a tighter dependence on d. Apriori, however, this seems impossible, since Hk(x) is not +given by an outer product of k vectors. But the following representation of the order k Hermite polynomials +will make this possible. +Hk(x) = E [(x + iZ)⊗k], +(B.26) +where Z ∼ N(0, Id). Using (B.26), we can bound scalar products of the form ⟨∇kf, Hk(x)⟩ and ⟨∇kf, Z ⊗ +Hk−1(x)⟩ in terms of the operator norm of ∇kf. More generally, we have the following lemma. +Lemma B.5. Let T ∈ (Rd)⊗k be a k-tensor, and v ∈ Rd. Then for all 0 ≤ ℓ ≤ k, we have +|⟨T, v⊗ℓ ⊗ Hk−ℓ(x)⟩| ≲ ∥T∥∥v∥ℓ(∥x∥k−ℓ + d +k−ℓ +2 ). +Proof. Using (B.26), we have +⟨T, v⊗ℓ ⊗ Hk−ℓ(x)⟩ = E [⟨T, v⊗ℓ ⊗ (x + iZ)⊗(k−ℓ)⟩] +27 + +and hence +|⟨T, v⊗ℓ ⊗ Hk−ℓ(x)⟩| ≤ E |⟨T, v⊗ℓ ⊗ (x + iZ)⊗k−ℓ⟩| +≤ ∥T∥∥v∥ℓE +� +∥x + iZ∥k−ℓ� +≲ ∥T∥∥v∥ℓ(∥x∥k−ℓ + +√ +d +k−ℓ). +(B.27) +B.2 +Hermite-Related Proofs from Section 4.2 +In this section, we return to the setting in the main text. +We let W satisfy all the assumptions from +Section 3.2, m ∈ Rd, σ ∈ Rd×d be such that ∥m∥, ∥σ∥ ≲ 1, and ¯W(x) = W(m + σx). Also, let +rk(x) = ¯W(x) − +k−1 +� +j=0 +1 +j! +� +cj( ¯W), Hj(x) +� +, +(B.28) +where cj( ¯W) = E [ ¯W(Z)Hj(Z)] as usual. +Combining Lemmas B.4 and B.5 allows us to upper bound quantities of the form E [|rk(Y )|p] in terms of +the operator norm of ∇kW. +Corollary B.1. Let rk be as in (B.28), the remainder of the Hermite series expansion of ¯W, where k = 3 +if W ∈ C3 and k = 4 if W ∈ C4. 
Let Y ∈ Rd be a random variable such that E [∥Y ∥s] < ∞ for all +0 ≤ s ≤ 2pk + 2pq, where q is from Assumption W2. Then +E [|rk(Y )|p] ≲ +� +dk +N k−2 +� p +2 �� +E ∥Y/ +√ +d∥2kp + +� +E ∥Y/ +√ +d∥2(k−1)p + 1 +� +× +� +1 + +� +E +� +∥Y/ +√ +d∥2pq�� +(B.29) +Proof. Let ∇k ¯W be shorthand for ∇k ¯W((1 − t)Z + tY ). Using (B.19) for f = ¯W, we have +|rk(Y )| ≲ +� 1 +0 +E Z +���� +∇k ¯W, Hk(Y ) − Z ⊗ Hk−1(Y ) +���� +dt. +(B.30) +Raising this inequality to the pth power and applying Jensen’s inequality twice, we have +|rk(Y )|p ≲ +� 1 +0 +E Z +���� +∇k ¯W, Hk(Y ) − Z ⊗ Hk−1(Y ) +���p� +dt. +(B.31) +We now take the Y -expectation of both sides, and we are free to assume Y is independent of Z. Note +that the integrand on the right-hand side can be bounded by a∥Y ∥p(q+k) + b for some a and b, since +∥∇k ¯W((1 − t)Z + tY )∥ ≲ (1 + ∥Z∥ + ∥Y ∥)q by Assumption W2, and since the tensors Hk(Y ), Hk−1(Y ) are +made up of at most order k polynomials of Y . Since E [∥Y ∥pq+pk] < ∞ by assumption, we can bring the +Y -expectation inside the integral. Hence +E [|rk(Y )|p] ≲ +� 1 +0 +E +���� +∇k ¯W, Hk(Y ) − Z ⊗ Hk−1(Y ) +���p� +dt, +(B.32) +where the expectation is over both Z and Y . Next, using Lemma B.5 we have +����⟨∇k ¯W, Hk(Y )−Z ⊗ Hk−1(Y )⟩ +���� +p +≲ ∥∇k ¯W∥p � +∥Y ∥kp + d +kp +2 + ∥Z∥p∥Y ∥(k−1)p + ∥Z∥pd +(k−1)p +2 +� +. +(B.33) +28 + +Substituting this into (B.32) we have +E [|rk(Y )|p] ≲ +� 1 +0 +E +���∇k ¯W +��p � +∥Y ∥kp + d +kp +2 + ∥Z∥p∥Y ∥(k−1)p + ∥Z∥pd +(k−1)p +2 +�� +dt +≲ +� 1 +0 +E +� +∥∇k ¯W∥2p� 1 +2 +�� +E ∥Y ∥2kp + d +p +2 +� +E ∥Y ∥2(k−1)p + d +kp +2 +� +dt +≤ d +kp +2 +�� +E ∥Y/ +√ +d∥2kp + +� +E ∥Y/ +√ +d∥2(k−1)p + 1 +� +× +� 1 +0 +E +���∇k ¯W +��2p� 1 +2 dt. +(B.34) +We used Cauchy-Schwarz and the independence of Y and Z to get the second line. Finally, recall that +∇k ¯W = ∇k ¯W((1 − t)Z + tY ) and note that since ∥σ∥ ≲ 1, we have +∥∇k ¯W((1 − t)Z + tY )∥ ≲ ∥∇kW(m + (1 − t)σZ + tσY )∥. +We now apply Lemma 3.4 with Y = m + (1 − t)σZ + tσY . 
Note that +E +����m + (1 − t)σZ + tσY +��/ +√ +d +�2pq� +≲ 1 + E +� +∥Y/ +√ +d∥2pq� +and hence +E +���∇k ¯W +��2p� 1 +2 ≲ E +���∇kW(m + (1 − t)σZ + tσY ) +��2p� 1 +2 +≲ +� +1 + +� +E +� +∥Y/ +√ +d∥2pq�� +N p(1−k/2) +(B.35) +for all t ∈ [0, 1]. Combining this inequality with (B.34) and noting that dkp/2N p(1−k/2) = (dk/N k−2)p/2 +gives (B.29). +C +Proofs Related to Affine Invariance +Recall the equations +E [∇V (m + S1/2Z)] = 0, +E [∇2V (m + S1/2Z)] = S−1 +(EV ) +and the definition of RV for a measure π ∝ e−V : +RV = +� +(m, S) ∈ Rd×Sd +++ : S ⪯ 2H−1, +∥ +√ +H +√ +S∥2 + ∥ +√ +H(m − m∗)∥2 ≤ 8 +� +, +(C.1) +where m∗ = argminx∈Rd V (x) and H = ∇2V (m∗). +Lemma C.1. Let V2(x) = V1(Ax + b) for some A ∈ Rd×d invertible and b ∈ Rd. Then the pair (m1, S1) is +a unique solution to (EV1) in the set RV1 if and only if the pair (m2, S2) given by +m2 = A−1(m1 − b), +S2 = A−1S1A−T +(C.2) +is a unique solution to (EV2) in the set RV2. +Proof. It suffices to prove the following two statements. (1) If (m1, S1) ∈ RV1 solves (EV1) then (m2, S2) +given by (C.2) lies in RV2 and solves (EV2). (2) If (m2, S2) ∈ RV2 solves (EV2) then (m1, S1) given by +m1 = Am2 + b, S1 = AS2AT lies in RV1 and solves (EV1). +29 + +We prove the first statement, and the second follows by a symmetric argument. So let (m1, S1) ∈ RV1 +solve (EV1). We first show (m2, S2) given by (C.2) solves (EV2). We have +∇V2(x) = AT ∇V1(Ax + b), +∇2V2(x) = AT ∇2V1(Ax + b)A. +(C.3) +Note also that if σ = A−1S1/2 +1 +then σσT = A−1S1A−T . We therefore have +E +� +∇V2 +� +A−1(m1 − b) + +� +A−1S1A−T �1/2 Z +� � += E +� +∇V2 +� +A−1(m1 − b) + A−1S1/2 +1 +Z +�� += AT E +� +∇V1 +� +m1 + S1/2 +1 +Z +�� += 0. +(C.4) +Similarly, +E +� +∇2V2 +� +A−1(m1 − b) + +� +A−1S1A−T �1/2 Z +� � += E +� +∇2V2 +� +A−1(m1 − b) + A−1S1/2 +1 +Z +�� += AT E +� +∇2V1 +� +m1 + S1/2 +1 +Z +�� +A += AT S−1 +1 A = S−1 +2 . +(C.5) +To conclude, we show (m2, S2) ∈ RV2. Let m∗i be the global minimizer of Vi and Hi = ∇2V (m∗i), i = 1, 2. 
+Then m∗2 = A−1(m∗1 − b) and H2 = AT H1A. Since S1 ⪯ 2H−1 +1 , it follows that +S2 = A−1S1A−T ⪯ 2A−1H−1 +1 A−T = 2H−1 +2 . +Furthermore, direct substitution shows that +∥ +√ +H2(m2 − m∗2)∥2 = (m2 − m∗2)T H2(m2 − m∗2) += (m1 − m∗1)T H1(m1 − m∗1) = ∥ +√ +H1(m1 − m∗1)∥2. +(C.6) +Finally, note that +∥ +� +H2 +� +S2∥2 = ∥ +� +S2H2 +� +S2∥ += sup +u̸=0 +uT √S2H2 +√S2u +∥u∥2 += sup +u̸=0 +uT H2u +∥√S2 +−1u∥2 = sup +u̸=0 +uT H2u +uT S−1 +2 u += sup +u̸=0 +uT H1u +uT S−1 +1 u = ∥ +� +H1 +� +S1∥2. +(C.7) +Therefore, +∥ +� +H2 +� +S2∥2 + ∥ +√ +H2(m2 − m∗2)∥2 = ∥ +� +H1 +� +S1∥2 + ∥ +√ +H1(m1 − m∗1)∥2 ≤ 8. +Recall that +W(x) = nv +� +m∗ + +√ +nH +−1x +� +, +H = ∇2v(m∗), +30 + +and that N = nr, where r is from Assumption V1. The following preliminary calculation will be useful for +showing Assumptions V1, V2 imply Assumptions W1, W2, respectively. Given x ∈ Rd, let +y = √α2 +√ +H +−1x. +We have +√ +N∥∇3W( +√ +Nx)∥ ≤ +√ +N +√nα2 +3 +���∇3(nv) +� +m∗ + +√ +nH +−1√ +Nx +���� += +√r +α2√α2 +���∇3v +� +m∗ + √r +√ +H +−1x +���� += +√r +α2√α2 +����∇3v +� +m∗ + +� r +α2 +y +����� . +(C.8) +Analogously, +N∥∇4W( +√ +Nx)∥ ≤ +N +√nα2 +4 +���∇4(nv) +� +m∗ + +√ +nH +−1√ +Nx +���� += r +α2 +2 +���∇4v +� +m∗ + √r +√ +H +−1x +���� += r +α2 +2 +����∇4v +� +m∗ + +� r +α2 +y +����� . +(C.9) +Lemma C.2. Assumptions V1, V2, and V3 imply Assumptions W1, W2, and W3 with N = nr, where r is +from Assumption V1. +Proof. Let y = √α2 +√ +H +−1x. Note that ∥y∥ ≤ ∥x∥ and in particular, if ∥x∥ ≤ 1 then ∥y∥ ≤ 1. To show +that V1 implies W1, note that by the above calculation we have +√ +N sup +∥x∥≤1 +∥∇3W( +√ +Nx)∥ ≤ +√r +α2√α2 +sup +∥y∥≤1 +����∇3v +� +m∗ + +� r +α2 +y +����� ≤ 1 +2, +(C.10) +as desired. To show that W2 implies V2, fix x ∈ Rd and note that +√ +N∥∇3W( +√ +Nx)∥ ≤ +√r +α2√α2 +����∇3v +� +m∗ + +� r +α2 +y +����� +≤ 1 + ∥y∥q ≤ 1 + ∥x∥q, +(C.11) +as desired. The calculation for the fourth derivative is analogous. 
To show that Assumption V3 implies W3, fix ∥x∥ ≥ √N and let y = (nH)^{−1/2} x, so that W(x) = nv(y + m∗). Note that ∥y∥ ≥ ∥x∥/√(nβ2) ≥ √(N/(nβ2)) = √(r/β2). Hence we can apply Assumption V3 to conclude that

W(x) = nv(m∗ + y) ≥ (d + 12q + 36) log(√(nβ2) ∥y∥) ≥ (d + 12q + 36) log ∥√(nH) y∥ = (d + 12q + 36) log ∥x∥,    (C.12)

as desired.

D    Logistic Regression Example

Details of Numerical Simulation

For the numerical simulation displayed in Figure 1, we take d = 2 and n = 100, 200, . . . , 1000. For each n, we draw ten sets of covariates xi, i = 1, . . . , n, from N(0, λ²Id) with λ = √5, yielding ten posterior distributions πn(· | x1:n). For each πn we compute the ground truth mean and covariance by directly evaluating the integrals on a regularly spaced grid (this is feasible in two dimensions). The mode m∗ of πn is found by a standard optimization procedure, and the Gaussian VI estimates m̂, Ŝ are computed using the procedure described in [LCB+22]. We used the authors' implementation of this algorithm, found at https://github.com/marc-h-lambert/W-VI. We then compute the Laplace and VI mean and covariance approximation errors for each n and each of the ten posteriors at a given n. The solid lines in Figure 1 depict the average approximation errors over the ten distributions at each n. The shaded regions depict the spread of the middle eight out of ten approximation errors.

Verifying the Assumptions

As discussed in Section 2.3, we make the approximation

v(z) ≈ v∞(z) = −E[Y log s(XT z) + (1 − Y ) log(1 − s(XT z))].

Here, X ∼ N(0, λ²Id) and Y | X ∼ Bernoulli(s(X1)), since X1 = e1T X. Recall that s is the sigmoid, s(a) = (1 + e^{−a})^{−1}. Below, the parameters α2, β2, etc. are all computed for the function v∞.

Note that z = e1 is the global minimizer of v∞. We have ∇²v∞(z) = E[s′(XT z)XXT] and in particular, ∇²v∞(e1) = E[s′(X1)XXT]. Also,

s′(a) = s(a)(1 − s(a)) = 1/(2(1 + cosh(a))) ∈ (0, 1/4].
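The derivative identity above can be verified directly; here is a minimal check, assuming the standard convention s(a) = 1/(1 + e^{−a}), under which s′ = s(1 − s):

```python
import math

def s(a):
    """Logistic sigmoid s(a) = 1 / (1 + e^{-a})."""
    return 1.0 / (1.0 + math.exp(-a))

for a in [-5.0, -1.0, 0.0, 0.3, 2.0, 7.5]:
    ds = s(a) * (1.0 - s(a))
    # s'(a) = s(a)(1 - s(a)) = 1 / (2(1 + cosh(a)))
    assert abs(ds - 1.0 / (2.0 * (1.0 + math.cosh(a)))) < 1e-12
    # the derivative is maximized at a = 0, where it equals 1/4
    assert 0.0 < ds <= 0.25
```

The second assertion reflects the bound s′ ∈ (0, 1/4] used repeatedly in the estimates that follow.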
+To lower bound λmin(∇2v∞(e1)), note that for ∥u∥ = 1 we have +uT ∇2v∞(e1)u = E [s′(X1)(XT u)2] += u2 +1E [s′(X1)X2 +1] + λ2 +d +� +j=2 +u2 +jE [s′(X1)] +≥ s′(λ) +� +�u2 +1E [X2 +1{|X1| ≤ λ}] + λ2 +d +� +j=2 +u2 +jP(|X1| ≤ λ) +� +� +≳ λ2s′(λ), +(D.1) +and hence α2 ≳ λ2s′(λ). Using that s′ ≤ 1/4, we also have the upper bound +λmax(∇2v∞(e1)) ≤ λ2 +4 = β2. +(D.2) +Next, we need to upper bound ∥∇3v∞∥. We have +∇3v∞(z) = E [s′′(XT z)X⊗3], +s′′(a) = s(a)(1 − s(a))(1 − 2s(a)), +so that +∥∇3v∞(z)∥ = +sup +∥u1∥=∥u2∥=∥u3∥=1 +E +� +s′′(XT z) +3 +� +k=1 +(uT +k X) +� +. +One can show that s′′(a) ∈ [−1, 1] for all a ∈ R. Hence +E +� +s′′(XT z) +3 +� +k=1 +(uT +k X) +� +≤ E +� 3 +� +k=1 +|uT +k X| +� +≤ +3 +� +k=1 +E +� +|uT +k X|3�1/3 ≤ 2λ3. +(D.3) +Here, we used that uT +k X +d= N(0, λ2), whose third absolute moment is bounded by 2λ3. We therefore get the +bound +∥∇3v∞(z)∥ ≤ β3 := 2λ3. +(D.4) +32 + +Note that this constant bound holds for all z ∈ Rd. Next, we need to find r such that +sup +∥z−m∗∥≤√ +r/α2 +∥∇3v∞(z)∥ ≤ α3/2 +2 +2√r . +Using the uniform bound (D.4) on ∥∇3v∞∥, it suffices to take r such that +β3 = α3/2 +2 +2√r =⇒ r = α3 +2 +4β2 +3 +≳ s′(λ)3. +(D.5) +Finally, we verify Assumption V3. To do so, recall that v∞ is convex. Therefore, if y lies on the line +segment between 0 and z, with ∥y∥ = +� +r/β2 < ∥z∥, then +v∞(m∗ + z) − v∞(m∗) ≥ +∥z∥ +� +r/β2 +(v∞(m∗ + y) − v∞(m∗)) +≥ +� +β2/r +inf +∥y∥=√ +r/β2 +[v∞(m∗ + y) − v∞(m∗)] ∥z∥. +(D.6) +It is clear that if λ is a constant then the parameters in this inequality, as well as the infimum, are lower +bounded by absolute constants. Therefore, since ∥z∥ ≥ log ∥z∥, Assumption V3 is satisfied. +E +Proofs from Section 5 +The proofs in this section rely on tensor-matrix and tensor-vector scalar products. Let us review the rules +of such scalar products, and how to bound the operator norms of these quantities. Let v ∈ Rd, A ∈ Rd×d, +and T ∈ Rd×d×d. 
We define the vector ⟨T, A⟩ ∈ Rd and the matrix ⟨T, v⟩ ∈ Rd×d by +⟨T, A⟩i = +d +� +j,k=1 +TijkAjk, +i = 1, . . . , d, +⟨T, v⟩ij = +d +� +k=1 +Tijkvk, +i, j = 1, . . . , d. +(E.1) +We will always sum over the last two or last one indices of the tensor. Note that the norm of the matrix +⟨T, v⟩ is given by ∥⟨T, v⟩∥ = sup∥u∥=∥w∥=1 uT ⟨T, v⟩w, and we have +uT ⟨T, v⟩w = +d +� +i,j=1 +uiwj +d +� +k=1 +Tijkvk = ⟨T, u ⊗ w ⊗ v⟩ ≤ ∥T∥∥v∥. +Therefore, ∥⟨T, v⟩∥ ≤ ∥T∥∥v∥. +We also review the notion of operator norm for derivatives of a function, and note the distinction between +this kind of operator norm and the standard tensor operator norm. Specifically, consider a C2 function +f = (f1, . . . , fd) : Rd×d ×Rd → Rd, where Rd×d is endowed with the standard matrix norm. Then ∇σf(σ, m) +is a linear functional from Rd×d to Rd, and we let ⟨∇σf(σ, m), A⟩ ∈ Rd denote the application of ∇σf(σ, m) +to A. Note that we can represent ∇σf by the d × d × d tensor (∇σjkfi)d +i,j,k=1, so that ⟨∇σf(σ, m), A⟩ +coincides with the definition given above of tensor-matrix scalar products. However, ∥∇σf∥op is not the +standard tensor operator norm. Rather, +∥∇σf∥op = +sup +A∈Rd×d,∥A∥=1 +∥⟨∇σf, A⟩∥ = +sup +A∈Rd×d,∥A∥=1, +u∈Rd,∥u∥=1 +⟨∇σf, A ⊗ u⟩. +We continue to write ∥∇σf∥ to denote the standard tensor operator norm, i.e. +∥∇σf∥ = +sup +u,v,w∈Rd, +∥u∥=∥v∥=∥w∥=1 +⟨∇σf, u ⊗ v ⊗ w⟩. +33 + +Note also that ∇mf ∈ Rd×d is a matrix, and that +max +� +∥∇σf(σ, m)∥op , ∥∇mf(σ, m)∥op +� +≤ ∥∇f(σ, m)∥op ≤ ∥∇σf(σ, m)∥op + ∥∇mf(σ, m)∥op. +(E.2) +Finally, recall the notation +Br(0, 0) = {(σ, m) ∈ Rd×d × Rd : ∥σ∥2 + ∥m∥2 ≤ r2}, +Br = {σ ∈ Rd×d : ∥σ∥ ≤ r}, +Sc1,c2 = {σ ∈ Sd ++ : c1Id ⪯ σ ⪯ c2Id}. +(E.3) +Lemma E.1. Let f = (f1, . . . , fd) : Rd×d×Rd → Rd be C3, where Rd×d is the set of d×d matrices, endowed +with the standard matrix operator norm. Suppose f(0, 0) = 0, ∇σf(0, 0) = 0, ∇mf(σ, m) is symmetric for +all m, and ∇mf(0, 0) = Id. Let r > 0 be such that +sup +(σ,m)∈Br(0,0) +∥∇f(σ, m) − ∇f(0, 0)∥op ≤ 1 +4. 
+(E.4) +Then for each σ ∈ Rd×d such that ∥σ∥ ≤ r/2 there exists a unique m = m(σ) ∈ Rd such that f(σ, m(σ)) = 0 +and (σ, m(σ)) ∈ Br(0, 0). Furthermore, the map σ �→ m(σ) is C2, with +1 +2Id ⪯ ∇mf(σ, m) +�� +m=m(σ) ⪯ 3 +2Id, +∥∇σm(σ)∥op ≤ 1. +(E.5) +The proof uses the following lemma. +Lemma E.2 (Lemma 1.3 in Chapter XIV of [Lan93]). Let U be open in a Banach space E, and let f : U → E +be of class C1. Assume that f(0) = 0 and f ′(0) = I. Let r > 0 be such that ¯Br(0) ⊂ U. If +|f ′(z) − f ′(x)| ≤ s, +∀z, x ∈ ¯Br(0) +for some s ∈ (0, 1), then f maps ¯Br(0) bijectively onto ¯B(1−s)r(0). +Proof of Lemma E.1. Let φ : Rd×d × Rd → Rd×d × Rd be given by φ(σ, m) = (σ, f(σ, m)), so that φ(0, 0) = +(0, 0), and +∇φ(σ, m) = +� +Id×d +0 +∇σf(σ, m) +∇mf(σ, m) +� +. +(E.6) +For each (σ, m), (σ′, m′) ∈ Br(0, 0), we have +∥∇φ(σ, m) − ∇φ(σ′, m′)∥op = ∥∇f(σ, m) − ∇f(σ′, m′)∥op +≤ 2 +sup +(σ,m)∈Br(0,0) +∥∇f(σ, m) − ∇f(0, 0)∥op ≤ 1 +2. +(E.7) +Note also that ∇φ(0, 0) is the identity. Thus by Lemma E.2, we have that φ is a bijection from Br(0, 0) +to Br/2(φ(0, 0)) = Br/2(0, 0). Now, fix any σ ∈ Rd×d such that ∥σ∥ ≤ r/2. Then (σ, 0) ∈ Br/2(0, 0), and +hence there exists a unique (σ′, m) ∈ Br(0, 0) such that (σ, 0) = φ(σ′, m) = (σ′, f(σ′, m)). Thus σ = σ′ and +f(σ, m) = 0. In other words, for each σ such that ∥σ∥ ≤ r/2 there exists a unique m = m(σ) such that +(σ, m(σ)) ∈ Br(0, 0) and such that 0 = f(σ, m). +The map σ �→ m(σ) is C2 by standard Implicit Function Theorem arguments. To show that the first +inequality of (E.5) holds, note that we have +∥∇mf(σ, m(σ)) − ∇mf(0, 0)∥op ≤ ∥∇f(σ, m(σ)) − ∇f(0, 0)∥op ≤ 1/4 ≤ 1/2 +34 + +by (E.4) since we know that (σ, m(σ)) ∈ Br(0, 0). Thus, +Id = ∇2W(0) = ∇mf(0, 0) +=⇒ 1 +2Id ⪯ ∇mf(σ, m(σ)) ⪯ 3 +2Id. +(E.8) +To show the second inequality of (E.5), we first need the supplementary bound +∥∇σf(σ, m(σ))∥op = ∥∇σf(σ, m(σ)) − ∇σf(0, 0)∥op ≤ 1/2 +(E.9) +which holds by the same reasoning as above. 
Now, +∂σjkm = −∇mf(σ, m)−1∂σjkf(σ, m) ∈ Rd +by standard Implicit Function Theorem arguments, where ∇mf(σ, m) is a matrix, ∂σjkf(σ, m) is a vector, +and ∇σm, ∇σf are linear maps from Rd×d to Rd. Hence by the first inequality in (E.5) combined with (E.9) +we have +∥∇σm(σ)∥op = sup +∥A∥=1 +∥⟨∇σm(σ), A⟩∥ += sup +∥A∥=1 +∥∇mf(σ, m)−1 +d +� +j,k=1 +∂σjkf(σ, m)Ajk∥ += sup +∥A∥=1 +∥∇mf(σ, m)−1⟨∇σf, A⟩∥ +≤ ∥∇mf(σ, m)−1∥∥∇σf∥op ≤ 2 × 1 +2 = 1. +(E.10) +Lemma E.3. Let f : Rd×d × Rd → Rd be given by f(σ, m) = E [∇W(σZ + m)]. Then all the conditions of +Lemma E.1 are satisfied; in particular, (E.4) is satisfied with r = 2 +√ +2. Thus the conclusions of Lemma E.1 +hold with this choice of r. +Proof. Note that f is C2 thanks to the fact that W is C3 and ∇W grows polynomially by Assumption W2. +We then immediately have f(0, 0) = ∇W(0) = 0, ∇mf(σ, m) = E [∇2W(m + σZ)] is symmetric for all m, σ, +and ∇mf(0, 0) = ∇2W(0) = Id. To show ∇σf(0, 0) = 0, we compute the i, j, k term of this tensor: +∂σjkfi = ∂σjkE [∂iW(m + σZ)] = E [∂2 +ijW(m + σZ)Zk], +so that ∂σjkfi(0, 0) = E [∂2 +i,jW(0)Zk] = 0. It remains to show that for r = 2 +√ +2 we have +sup +(σ,m)∈Br(0,0) +∥∇f(σ, m) − ∇f(0, 0)∥op ≤ 1 +4. +(E.11) +First, note that +sup +(σ,m)∈Br(0,0) +∥∇f(σ, m) − ∇f(0, 0)∥op ≤ r +sup +(σ,m)∈Br(0,0) +∥∇2f(σ, m)∥op, +where ∇2f(σ, m) is a bilinear form on (Rd×d × Rd)2, and we have +∥∇2f(σ, m)∥op ≤ ∥∇2 +σf(σ, m)∥op + 2∥∇σ∇mf(σ, m)∥op + ∥∇2 +mf(σ, m)∥op. +For f(σ, m) = E [∇W(σZ + m)], these second order derivatives are given by +∂2 +mi,mjf(σ, m) = E [∂2 +i,j∇W(m + σZ)], +∂mi∂σjkf(σ, m) = E [∂2 +i,j∇W(m + σZ)Zk], +∂2 +σjk,σℓpf(σ, m) = E [∂2 +j,ℓ∇W(m + σZ)ZkZp], +(E.12) +35 + +each a vector in Rd. From the first line, we get that ∥∇2 +mf(σ, m)∥op ≤ E ∥∇3W(m + σZ)∥, where ∥∇3W∥ +is the standard tensor norm. 
From the second line, we get +∥∇m∇σf(σ, m)∥op += +sup +∥A∥=1,∥x∥=1 +����E +� +d +� +i,j,k=1 +∂2 +i,j∇W(m + σZ)ZkxiAjk +����� += +sup +∥A∥=1,∥x∥=1 +����E +� +d +� +i,j=1 +∂2 +i,j∇W(m + σZ)xi(AZ)j +����� += +sup +∥A∥=1,∥x∥=1 +����E +�� +∇3W(m + σZ), x ⊗ AZ +������ +≤ +sup +∥A∥=1,∥x∥=1 +E +� +∥x∥∥AZ∥∥∇3W(m + σZ)∥ +� +≤ +√ +d +� +E [∥∇3W(m + σZ)∥2]. +(E.13) +A similar computation gives +∥∇2 +σf(σ, m)∥op ≤ +sup +∥A∥=1,∥B∥=1 +E [∥AZ∥∥BZ∥∥∇3W(m + σZ)∥] +≤ 2d +� +E [∥∇3W(m + σZ)∥2] ≲ d/ +√ +N. +(E.14) +Thus overall we have +∥∇2f(σ, m)∥op ≤ (2d + 2 +√ +d + 1) +� +E [∥∇3W(m + σZ)∥2] +≤ 5d +sup +(σ,m)∈Br(0,0) +� +E [∥∇3W(m + σZ)∥2] +(E.15) +and hence +sup +(σ,m)∈Br(0,0) +∥∇f(σ, m) − ∇f(0, 0)∥op +≤ 5rd +sup +(σ,m)∈Br(0,0) +� +E [∥∇3W(m + σZ)∥2] +≤ 10 +√ +2d +√ +N +( +√ +3 + +� +(2q)!), +(E.16) +where in the last line we applied Lemma E.5 and substituted r = 2 +√ +2. To conclude, recall that ( +√ +3 + +� +(2q)!)/ +√ +N ≤ 1/(40 +√ +2d) by the assumption in the statement of Lemma 5.1. +Lemma E.4. Let r = 2 +√ +2 and σ ∈ S0,r/2 �→ m(σ) ∈ Rd be the restriction to symmetric nonnegative +matrices of the map furnished by Lemmas 5.2 and 5.3 . Then the function F given by +F(σ) = E [∇2W(m(σ) + σZ)]−1/2 +is well-defined and a strict contraction on S0,r/2. Moreover, +F(S0,r/2) ⊆ Sc1,c2 ⊆ S0,r/2, +where c1 = +� +2/3, c2 = +√ +2 = r/2. +Proof. First, let G(σ) = E [∇2W(m(σ) + σZ)] and f(σ, m) = E [∇W(m + σZ)] as in Lemma E.3. Note that +∇mf(σ, m) = E [∇2W(σZ + m)], so that G(σ) = ∇mf(σ, m)|m=m(σ) and hence by (E.5) of Lemma E.1 we +have +1 +2Id ⪯ G(σ) ⪯ 3 +2Id, +∀σ ∈ S0,r/2. +(E.17) +36 + +But then G(σ) has a unique invertible symmetric positive definite square root, and we define F(σ) = G(σ)−1/2 +to be the inverse of this square root. Moreover, using (E.17), it follows that +c1Id ⪯ F(σ) ⪯ c2Id, +∀σ ∈ S0,r/2, +where c1 = +� +2/3 and c2 = +√ +2 = r/2. In other words, F(S0,r/2) ⊆ Sc1,c2 ⊆ S0,r/2. It remains to show F is +a contraction on S0,r/2. Let σ1, σ2 ∈ S0,r/2. 
We will first bound ∥G(σ1) − G(σ2)∥. We have +∥G(σ1) − G(σ2)∥ ≤ ∥σ1 − σ2∥ +sup +σ∈S0,r/2 +∥∇σG(σ)∥op, +(E.18) +and +∥∇σG(σ)∥op = sup +∥A∥=1 +��� +∇σG(σ), A +��� += sup +∥A∥=1 +����E +� +∇3W, +� +A, ∇σ (m(σ) + σZ) +������. +(E.19) +Here, the quantities inside of the ∥ · ∥ on the right are matrices. Indeed, ⟨∇σG, A⟩ denotes the application +of ∇σG to A. Since G sends matrices to matrices, ∇σG is a linear functional which also sends matrices to +matrices. In the third line, ∇σ(m(σ) + σZ) should be interpreted as a linear functional from Rd×d to Rd, so +⟨A, ∇σ(m(σ) + σZ)⟩ is a vector in Rd, and the inner product of this vector with the d × d × d tensor ∇3W +is a matrix. Using that ∥⟨T, x⟩∥ ≤ ∥T∥∥x∥, as explained at the beginning of this section, we have +���� +� +∇3W, +� +A, ∇σ (m(σ) + σZ) +������ ≤ +��∇3W +�� ∥⟨A, ∇σ (m(σ) + σZ)⟩∥ +≤ ∥∇3W∥∥∇σ(m(σ) + σZ)∥op += ∥∇3W∥∥∇σm(σ) + Z ⊗ Id∥op ≤ ∥∇3W∥(1 + ∥Z∥). +(E.20) +To get the last bound, we used that ∥∇σm(σ)∥op ≤ 1, shown in Lemma E.3. We also use the fact that +∥Z ⊗ Id∥op = sup∥A∥=1 ∥ ⟨A, Z ⊗ Id⟩ ∥ = sup∥A∥=1 ∥AZ∥ = ∥Z∥. (Recall that since Z ⊗ Id is part of ∇σm, +we are considering Z ⊗ Id as an operator on matrices rather than as a d × d × d tensor, and this is why we +take the supremum over matrices A.) +Substituting (E.20) back into (E.18), we get +∥G(σ1) − G(σ2)∥ ≤ ∥σ1 − σ2∥ +sup +σ∈S0,r/2 +E +���∇3W (m(σ) + σZ) +�� (1 + ∥Z∥) +� +≤ ∥σ1 − σ2∥ +√ +2(1 + +√ +d) +√ +3 + +� +(2q)! +√ +N +≤ ∥σ1 − σ2∥1 + +√ +d +40d +. +(E.21) +The second inequality is by Cauchy-Schwarz and Lemma E.5 below. The third inequality uses that ( +√ +3 + +� +(2q)!)/ +√ +N ≤ 1/(40 +√ +2d), by the assumption in the statement of Lemma 5.1. Now, note that thanks to +Lemma E.3, both λmin(G(σ1)) and λmin(G(σ2)) are bounded below by 1/2. Using Lemma E.6, we therefore +have +∥F(σ1) − F(σ2)∥ ≤ +√ +2∥G(σ1) − G(σ2)∥ +≤ 1 + +√ +d +20 +√ +2d ∥σ1 − σ2∥. +(E.22) +Hence F is a strict contraction. +Lemma E.5. Assume 4 +√ +2 ≤ +� +N/d. 
Then

\[ \sup_{(\sigma,m) \in B_{2\sqrt{2}}(0,0)} \mathbb{E}[\|\nabla^3 W(m + \sigma Z)\|^2] \le (3 + (2q)!)/N. \]

Proof. Fix \|m\|, \|\sigma\| \le 2\sqrt{2}, so that 2\|m\| \le \sqrt{N} and 2\|\sigma\|\sqrt{d} \le \sqrt{N}. By Assumption W2, we have

\[
\begin{aligned}
N\, \mathbb{E}[\|\nabla^3 W(m + \sigma Z)\|^2]
&\le 2 + 2\, \mathbb{E}\big[ \|(m + \sigma Z)/\sqrt{N}\|^{2q} \big] \\
&\le 2 + 2^{2q} \bigg[ \Big( \frac{\|m\|}{\sqrt{N}} \Big)^{2q} + \Big( \frac{\|\sigma\|}{\sqrt{N}} \Big)^{2q} \mathbb{E}[\|Z\|^{2q}] \bigg] \\
&\le 2 + \Big( \frac{2\|m\|}{\sqrt{N}} \Big)^{2q} + \Big( \frac{2\|\sigma\|\sqrt{d}}{\sqrt{N}} \Big)^{2q} (2q)! \\
&\le 3 + (2q)!.
\end{aligned}
\tag{E.23}
\]

Lemma E.6. Let A_0 and A_1 be psd, and A_0^{1/2}, A_1^{1/2} their unique psd square roots. Assume without loss of generality that \lambda_{\min}(A_0) \le \lambda_{\min}(A_1). Then

\[ \|A_1^{-1/2} - A_0^{-1/2}\| \le \frac{\|A_1 - A_0\|}{2\lambda_{\min}(A_0)^{3/2}}. \]

Proof. First note that

\[ A_1^{-1/2} - A_0^{-1/2} = A_1^{-1/2}\big(A_0^{1/2} - A_1^{1/2}\big)A_0^{-1/2} \]

and hence

\[ \|A_1^{-1/2} - A_0^{-1/2}\| \le \|A_1^{-1/2}\| \|A_0^{-1/2}\| \|A_1^{1/2} - A_0^{1/2}\| \le \frac{\|A_1^{1/2} - A_0^{1/2}\|}{\lambda_{\min}(A_0)}. \]

Now, define A_t = A_0 + t(A_1 - A_0) and let B_t = A_t^{1/2}, where B_t is the unique psd square root of A_t. We then have \|A_1^{1/2} - A_0^{1/2}\| \le \sup_{t \in [0,1]} \|\dot{B}_t\|. We will now express \dot{B}_t in terms of \dot{A}_t and B_t. Differentiating B_t^2 = A_t, we get

\[ B_t \dot{B}_t + \dot{B}_t B_t = \dot{A}_t = A_1 - A_0. \tag{E.24} \]

Now, one can check that the solution \dot{B}_t to this equation is given by

\[ \dot{B}_t = \int_0^\infty e^{-sB_t}(A_1 - A_0)e^{-sB_t}\, ds \]

and hence

\[ \|\dot{B}_t\| \le \|A_1 - A_0\| \int_0^\infty \|e^{-sB_t}\|^2\, ds = \frac{\|A_1 - A_0\|}{2\lambda_{\min}(B_t)} = \frac{\|A_1 - A_0\|}{2\sqrt{\lambda_{\min}(A_t)}}. \]

Now note that \lambda_{\min}(A_t) \ge \lambda_{\min}(A_0), since A_t is just a convex combination of A_0 and A_1. Hence \|\dot{B}_t\| \le \|A_1 - A_0\|/\big(2\sqrt{\lambda_{\min}(A_0)}\big) for all t \in [0,1]. Combining all of the above estimates gives

\[ \|A_1^{-1/2} - A_0^{-1/2}\| \le \frac{\|A_1 - A_0\|}{2\lambda_{\min}(A_0)^{3/2}}. \]

References

[AR20] P. Alquier and J. Ridgway. Concentration of tempered posteriors and of their variational approximations. The Annals of Statistics, 48(3):1475–1497, 2020.

[BKM17] D. M. Blei, A. Kucukelbir, and J. D. McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.

[CC96] M. K. Cowles and B. P. Carlin.
Markov chain Monte Carlo convergence diagnostics: A comparative review. Journal of the American Statistical Association, 91(434):883–904, 1996.

[DD21] K. Daudel and R. Douc. Mixture weights optimisation for alpha-divergence variational inference. In Advances in Neural Information Processing Systems, volume 34, pages 4397–4408, 2021.

[DDP21] K. Daudel, R. Douc, and F. Portier. Infinite-dimensional gradient-based descent for alpha-divergence minimisation. The Annals of Statistics, 49(4):2250–2270, 2021.

[HY19] W. Han and Y. Yang. Statistical inference in mean-field variational Bayes. arXiv preprint arXiv:1911.01525, 2019.

[Kat23] A. Katsevich. The dimension dependence of the Laplace approximation. In preparation, 2023.

[KGB22] M. J. Kasprzak, R. Giordano, and T. Broderick. How good is your Gaussian approximation of the posterior? Finite-sample computable error bounds for a variety of useful divergences. arXiv preprint arXiv:2209.14992, 2022.

[Lan93] S. Lang. Real and Functional Analysis. Graduate Texts in Mathematics. Springer New York, NY, 3rd edition, 1993.

[LCB+22] M. Lambert, S. Chewi, F. Bach, S. Bonnabel, and P. Rigollet. Variational inference via Wasserstein gradient flows. arXiv preprint arXiv:2205.15902, 2022.

[Leb72] N. Lebedev. Special Functions and Their Applications. Dover Publications, 1972.

[Spo22] V. Spokoiny. Dimension free non-asymptotic bounds on the accuracy of high dimensional Laplace approximation. arXiv preprint arXiv:2204.11038, 2022.

[VdV00] A. W. van der Vaart. Asymptotic Statistics, volume 3. Cambridge University Press, 2000.

[WB19] Y. Wang and D. M. Blei. Frequentist consistency of variational Bayes. Journal of the American Statistical Association, 114(527):1147–1161, 2019.

[ZG20] F. Zhang and C. Gao. Convergence rates of variational posterior distributions. The Annals of Statistics, 48(4):2180–2207, 2020.
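The matrix square-root perturbation bound of Lemma E.6 above is easy to sanity-check numerically. The following is a minimal sketch (not part of the paper) that compares both sides of the inequality for two random positive definite matrices; `numpy` is assumed, and the matrix norm is taken to be the spectral norm.

```python
import numpy as np

def inv_sqrt(A):
    """Inverse of the unique psd square root of a positive definite matrix A."""
    w, U = np.linalg.eigh(A)
    return (U / np.sqrt(w)) @ U.T  # U diag(w^{-1/2}) U^T

rng = np.random.default_rng(0)
d = 5
# Two random positive definite matrices, bounded away from singularity.
M0 = rng.standard_normal((d, d))
M1 = rng.standard_normal((d, d))
A0 = M0 @ M0.T + np.eye(d)
A1 = M1 @ M1.T + np.eye(d)
# WLOG lambda_min(A0) <= lambda_min(A1), as in the statement of the lemma.
if np.linalg.eigvalsh(A0)[0] > np.linalg.eigvalsh(A1)[0]:
    A0, A1 = A1, A0

lhs = np.linalg.norm(inv_sqrt(A1) - inv_sqrt(A0), 2)  # ||A1^{-1/2} - A0^{-1/2}||
rhs = np.linalg.norm(A1 - A0, 2) / (2 * np.linalg.eigvalsh(A0)[0] ** 1.5)
print(lhs <= rhs)
```

The constant 2λ_min(A0)^{3/2} in the denominator is exactly the one produced by the integral representation of Ḃ_t in the proof.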
+39 + diff --git a/9dA0T4oBgHgl3EQfO_9E/content/tmp_files/load_file.txt b/9dA0T4oBgHgl3EQfO_9E/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..abc309794cf7d6c3668899bf04dcc24d282cf975 --- /dev/null +++ b/9dA0T4oBgHgl3EQfO_9E/content/tmp_files/load_file.txt @@ -0,0 +1,1601 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf,len=1600 +page_content='On the Approximation Accuracy of Gaussian Variational Inference Anya Katsevich akatsevi@mit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='edu Philippe Rigollet rigollet@math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='mit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='edu January 6, 2023 Abstract The main quantities of interest in Bayesian inference are arguably the first two moments of the posterior distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' In the past decades, variational inference (VI) has emerged as a tractable approach to approximate these summary statistics, and a viable alternative to the more established paradigm of Markov Chain Monte Carlo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' However, little is known about the approximation accuracy of VI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' In this work, we bound the mean and covariance approximation error of Gaussian VI in terms of dimension and sample size.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Our results indicate that Gaussian VI outperforms significantly the classical Gaussian approximation obtained from the ubiquitous Laplace method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Our error analysis relies on a Hermite series expansion of the log posterior whose first terms are precisely cancelled out by the first order optimality conditions associated to the Gaussian VI optimization problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' 1 Introduction A central challenge in Bayesian inference is to sample from, or compute summary statistics of, a posterior distribution π on Rd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The classical approach to sampling is Markov Chain Monte Carlo (MCMC), in which a Markov chain designed to converge to π is simulated for sufficiently long time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' However, MCMC can be expensive, and it is notoriously difficult to identify clear-cut stopping criteria for the algorithm [CC96].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Besides, if one is only interested in summary statistics of π such as the mean and covariance, then generating samples from π may not be the most efficient way to achieve this goal.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' An alternative, often computationally cheaper, approach is variational inference (VI) [BKM17].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The idea of VI is to find, among all measures in a certain parameterized family P, the closest measure to π.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' While various measures of proximity have been proposed since the introduction of VI [DD21, DDP21], we employ here KL divergence, which is, by far, the most common choice.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Typically, statistics of interest, chiefly its first two moments, for measures in the family P are either readily available or else easily computable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' In this work, we consider the family of normal distributions, which are directly parameterized by their mean and covariance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' We define ˆπ = N( ˆm, ˆS) ∈ argmin p∈PGauss KL( p ∥ π), (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='1) and take ˆm, ˆS as our estimates of the true mean mπ and covariance Sπ of π.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' PGauss denotes the family of non-degenerate Gaussian distributions on Rd.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' A key difference between MCMC and VI is that unbiased MCMC algorithms yield arbitrarily accurate samples from π if they are run for long enough.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' On the other hand, the output of a perfect VI algorithm is ˆπ, which is itself only an approximation to π.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Therefore, a fundamental question in VI is to understand the quality of the approximation ˆπ ≈ π, particularly in terms of the statistics of interest.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' In this work, we bound the mean and covariance estimation errors ∥ ˆm − mπ∥ and ∥ ˆS − Sπ∥ for the Gaussian VI estimate (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Of course, we cannot expect an arbitrary, potentially multimodal π to be well-approximated by a Gaussian distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' In the setting of Bayesian inference, however, the Bernstein-von Mises theorem guarantees that under certain regularity conditions, a posterior distribution converges to a Gaussian density in the limit of large sample size [VdV00, Chapter 10].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' To understand why this is the case, consider a generic posterior 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='02168v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='ST] 5 Jan 2023 π = πn with density of the form πn(θ | x1:n) ∝ ν(θ) n � i=1 pθ(xi) (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='2) Here, ν is the prior, pθ is the model density, and x1:n = x1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' , xn are i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='d observations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Provided ν and pθ are everywhere positive, we can write πn as πn(θ) ∝ e−nvn(θ), vn(θ) := − 1 n n � i=1 log pθ(xi) − 1 n log ν(θ).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' If n is large and vn has a strict global minimum at θ = m∗, then πn will place most of its mass in a neighborhood of m∗.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' In other words, πn is effectively unimodal, and hence a Gaussian approximation is reasonable in this case.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' This reasoning drives a second, so-called Laplace approximation to πn, which is a Gaussian centered at m∗.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Hence, the mode m∗ can also serve as an approximation to the true mean mπn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' However, as we discuss below, Gaussian VI yields a more accurate estimate of mπn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Main contributions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Our main result quantifies the mean and covariance estimation errors of Gaussian VI for a target measure πn ∝ e−nvn, in terms of sample size n and dimension d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' In line with the above reasoning, the key assumption is that vn has a unique global minimizer.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' It is useful at this point to think of vn as being a quantity of order 1, and for the purpose of readability, we write simply vn = v in the rest of this introduction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' It is easy to see that πn ∝ e−nv has variance of order 1/n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' To account for this vanishing variance, we rescale the approximation errors appropriately in the statement of the following theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Let πn ∝ exp(−nv) have mean and covariance mπn, Sπn respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Assume that d3 ≤ n and that v ∈ C4(Rd) has a unique strict minimum at m∗.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' If ∥∇3v∥ and ∥∇4v∥ grow at most polynomially away from m∗, and if v grows at least logarithmically away from m∗, then the mean and covariance ˆmn, ˆSn of the variational Gaussian approximation (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='1) to π satisfy √n∥ ˆmn − mπn∥ ≲ �d3 n �3/2 n∥ ˆSn − Sπn∥ ≲ d3 n , (1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='3) Here, ≲ means the inequalities hold up to an absolute (d, n-independent) constant, as well as a factor depending on second and third order derivatives of v in a neighborhood of the mode m∗.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' This v-dependent factor is made explicit in Section 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The theorem shows that both the mean and covariance VI estimates, and especially the mean estimate ˆmn, are remarkably accurate approximations to the true mean and covariance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' As such, it is a compelling endorsement of Gaussian VI for estimating the posterior mean and covariance in the finite sample regime.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Although the condition n ≥ d3 is restrictive when d is very large, we believe that it is unavoidable without further assumptions and note that it also appears in existing bounds for the Laplace method [Spo22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' As mentioned above, the Laplace method is a competing Gaussian approximation to πn that is widespread in practice for its computational simplicity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' We use it as a benchmark to put the above error bounds into context.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The Laplace approximation to πn ∝ e−nv is given by πn ≈ N(m∗, (n∇2v(m∗))−1), where m∗ is the global minimizer of v.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' This approximation simply replaces v by its second order Taylor expansion around m∗.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The recent works [Spo22] and [KGB22] derive error bounds for the Laplace approxima- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Spokoiny [Spo22] shows that √n∥m∗ − mπn∥ ≲ (d3/n)1/2 assuming v is strongly convex, and [KGB22] similarly shows that √n∥m∗ − mπn∥ ≲ 1/√n with implicit dependence on d, under weaker assumptions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' For the covariance approximation, an explicit error bound is stated only in [KGB22];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' the authors show that n∥(n∇2v(m∗))−1 − Sπn∥ ≲ 1/√n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Meanwhile, Spokoiny states lemmas in the appendix from which one can derive a d-dependent covariance error bound.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' In a companion paper [Kat23], we extend the techniques developed in the present work to obtain the following tighter n dependence of the Laplace covariance error: n∥(n∇2v(m∗))−1 − Sπn∥ ≲ 1/n.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='4) 2 This n dependence can also be obtained using the approach in [Spo22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Let us summarize the n-dependence of these bounds, incorporating the 1/√n and 1/n scaling of the mean and covariance errors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The Gaussian VI mean approximation error is n−1/2 × n−3/2, which is a factor of n−1 more accurate than the Laplace mean error of n−1/2 × n−1/2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The covariance approximation error is the same for both methods (using the tighter covariance bound (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='4)): n−1 × n−1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' VI’s improved mean approximation accuracy is confirmed in our simulations of a simple Bayesian logistic regression example in d = 2;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' see Figure 1, and Section 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='3 for more details.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Figure 1: Gaussian VI yields a more accurate mean estimate than does Laplace, while the two covariance estimates are on the same order.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Here, πn is the likelihood of logistic regression given n observations in dimension d = 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' For the left-hand plot, the slopes of the best-fit lines are −1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='04 for the Laplace approximation and −2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='02 for Gaussian VI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' For covariance: the slopes of the best-fit lines are -2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='09 for Laplace, -2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='12 for VI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' We note that the Laplace approximation error bounds in the companion work [Kat23] are also tighter in their dimension dependence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' First-order optimality conditions and Hermite series expansions.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The improvement of Gaussian VI over the Laplace method to estimate the mean a posteriori rests on a remarkable interaction between first-order optimality conditions and a Hermite series expansion of the potential v.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Hereafter, we replace θ by x and let V = nv, π ∝ e−V .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The focal point of this work are the first order optimality equations for the minimization (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='1): ∇m,SKL( N(m, S) ∥ π) �� (m,S)=( ˆm, ˆS) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' This is also equivalent to setting the Bures-Wasserstein gradient of KL( p ∥ π) to zero at p = N( ˆm, ˆS) as in [LCB+22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Explicitly, we obtain that (m, S) = ( ˆm, ˆS) is a solution to E [∇V (m + S1/2Z)] = 0, E [∇2V (m + S1/2Z)] = S−1, (EV ) where Z ∼ N(0, Id) and S1/2 is the positive definite symmetric square root of S;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' see [LCB+22] for this calculation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' In some sense, the fact that N( ˆm, ˆS) minimizes the KL divergence to π does not explain why ˆm is such an accurate estimate of mπ.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Rather, the true reason has to do with properties of solutions to the fixed point equations (EV ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' To see why, consider the function ¯V (x) = V ( ˆm + ˆS1/2x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' If π ∝ e−V is close to the density of N( ˆm, ˆS), then ¯π ∝ e− ¯V should be close to the density of N(0, Id).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' In other words, we should have that ¯V (x) ≈ const.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' + ∥x∥2/2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' This is ensured by the first order optimality equations (EV ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Indeed, note that (EV ) can be written in terms of ¯V as E [∇ ¯V (Z)] = 0, E [∇2 ¯V (Z)] = Id.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='5) 3 Mean approx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' error m - mπ ( 10-2 10-3 m = m* (Laplace) m = mn (Gaussian VI) 10- 10-6 102 103 nCovariance approx.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' error IlS - Shm S = (n-v(m×))-1 (Laplace) S = Sn (Gaussian VI) 10-3 10- 10-5 102 103 nAs we explain in Section 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='4, the equations (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='5) set the first and second order coefficients in the Hermite series expansion of ¯V to 0 and Id, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' As a result, ¯V (x) − ∥x∥2/2 = const.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' + r3(x), where r3 is a Hermite series containing only third and higher order Hermite polynomials.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The accuracy of the Gaussian VI mean and covariance estimates stems from the fact that the Hermite remainder r3 is of order r3 ∼ 1/√n, and the fact that r3 is orthogonal to linear and quadratic functions with respect to the Gaussian measure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' See Section 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='4 for a high-level summary of this Hermite series based error analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Related Work.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The literature on VI can be roughly divided into statistical and algorithmic works.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Works on the statistical side have focused on the contraction of variational posteriors around a ground truth parameter in the large n (sample size) regime.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' (We use “variational posterior” as an abbreviation for variational approximation to the posterior.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=') For example, [WB19] prove an analogue of the Bernstein-von Mises theorem for the variational posterior, [ZG20] study the contraction rate of the variational posterior around the ground truth in a nonparametric setting, and [AR20] study the contraction rate of variational approximations to tempered posteriors, in high dimensions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' A key difference between these works and ours is that here, we determine how well the statistics of the variational posterior match those of the posterior itself, rather than those of a limiting (n → ∞) distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' We are only aware of one other work studying the problem of quantifying posterior approximation accuracy.' 
In [HY19], the authors consider a Bayesian model with "local" latent variables (one per data point) and global latent variables, and they study the mean field variational approximation, given by the product measure closest to the true posterior in terms of KL divergence. They show that the mean m̂ of their approximation satisfies √n ∥m̂ − mπ∥ ≲ 1/n^{1/4}. Since the algorithmic side of VI is not our focus here, we simply refer the reader to the work [LCB+22] and references therein. This work complements our analysis in that it provides rigorous convergence guarantees for an algorithm that solves the optimization problem (1.1).

Organization of the paper. The rest of the paper is organized as follows. In Section 2, we first redefine (m̂, Ŝ) as a certain "canonical" solution to the first order optimality conditions (E_V).
We then state our assumptions and main result on the Gaussian VI mean and covariance approximation errors, and present a numerical result confirming the n scaling of our bound. In Section 3, we give an overview of the proof, and in Section 4 we flesh out the details. Section 5 outlines the proof of the existence and uniqueness of the aforementioned "canonical" solution (m̂, Ŝ) to (E_V). In the Appendix, we derive a multivariate Hermite series remainder formula and then prove a number of supplementary results omitted from the main text.

Notation. For two k-tensors T, Q ∈ (ℝ^d)^{⊗k}, we define ⟨T, Q⟩ = Σ_{i1,…,ik=1}^d T_{i1…ik} Q_{i1…ik}, and let ∥T∥_F = ⟨T, T⟩^{1/2} be the Frobenius norm of T. We will more often make use of the operator norm of T, denoted simply by ∥·∥:

  ∥T∥ = sup_{∥u1∥≤1,…,∥uk∥≤1} ⟨u1 ⊗ ⋯ ⊗ uk, T⟩,   (1.6)

where the supremum is over vectors u1, …, uk ∈ ℝ^d. For positive scalars a, b, we write a ≲ b to denote that a ≤ Cb for an absolute constant C (the only exception to this notation is (1.3) above, in which ≲ also incorporates a v-dependent factor).
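The tensor norms just defined are easy to evaluate or bound numerically. The following sketch (our illustration, not from the paper) computes ⟨T, Q⟩ and ∥T∥_F directly, and uses random unit vectors to produce a Monte Carlo lower bound on the operator norm (1.6), which never exceeds ∥T∥_F:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 3, 3
T = rng.standard_normal((d,) * k)
Q = rng.standard_normal((d,) * k)

# <T, Q> = sum over all index tuples; ||T||_F = <T, T>^{1/2}.
inner = float(np.sum(T * Q))
frob = float(np.sqrt(np.sum(T * T)))

def op_norm_lower_bound(T, n_trials=2000, rng=rng):
    """Monte Carlo lower bound on sup <u1 x ... x uk, T> over unit vectors."""
    k, d = T.ndim, T.shape[0]
    best = 0.0
    for _ in range(n_trials):
        us = rng.standard_normal((k, d))
        us /= np.linalg.norm(us, axis=1, keepdims=True)
        val = T
        for u in us:                      # contract one slot at a time
            val = np.tensordot(val, u, axes=([0], [0]))
        best = max(best, abs(float(val))) # flipping u1's sign flips the sign
    return best

lb = op_norm_lower_bound(T)
# The operator norm is sandwiched: lb <= ||T|| <= ||T||_F.
assert 0.0 < lb <= frob + 1e-9
```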
We let mπ = E_π[X] and Sπ = Cov_π(X) = E_π[(X − mπ)(X − mπ)ᵀ]. Finally, for a function V with a unique global minimizer m∗, we let H_V denote ∇²V(m∗).

2 Statement of Main Result

Throughout the rest of the paper, we write π ∝ e^{−nv}. Note that v may depend on n in a mild fashion, as is often the case for Bayesian posteriors. We also define V = nv. In light of the centrality of the fixed point equations (E_V), we begin the section by redefining (m̂, Ŝ) as solutions to (E_V) rather than as minimizers of the KL divergence objective (1.1). These definitions diverge only in the case that V is not strongly convex.
Indeed, if V is strongly convex, then KL(· ∥ π) is strongly geodesically convex on the submanifold of normal distributions; see, e.g., [LCB+22]. Therefore, in this case, there is a unique minimizer π̂ of the KL divergence, corresponding to a unique solution (m̂, Ŝ) ∈ ℝ^d × S^d_{++} to (E_V). In general, however, if (m, S) solves (E_V), this does not guarantee that m is a good estimator of mπ. To see this, consider the equations in the following form, recalling that v = V/n:

  E[∇v(m + S^{1/2}Z)] = 0,   S E[∇²v(m + S^{1/2}Z)] = (1/n) Id.   (2.1)

Let x ≠ m∗ be a critical point of v, that is, ∇v(x) = 0, and consider the pair (m, S) = (x, 0).
For this (m, S), we have

  E[∇v(m + S^{1/2}Z)] = ∇v(x) = 0,   S E[∇²v(m + S^{1/2}Z)] = 0 ≈ (1/n) Id.

Thus (x, 0) is an approximate solution to (2.1), and by continuity, we expect that there is an exact solution nearby. In other words, to each critical point x of v is associated a solution (m, S) ≈ (x, 0) of (2.1). The solution of (2.1) in which we are interested, then, is the one near (m∗, 0). Lemma 1 below formalizes this intuition; we show there is a unique solution (m, S) to (E_V) in the set

  R_V = { (m, S) ∈ ℝ^d × S^d_{++} : S ⪯ 2H⁻¹, ∥√H √S∥² + ∥√H (m − m∗)∥² ≤ 8 },   (2.2)

where H = ∇²V(m∗) = n∇²v(m∗).
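For intuition, the following one-dimensional sketch solves (2.1) by damped fixed-point iteration with Gauss-Hermite quadrature. It is illustrative only: the potential v(x) = x²/2 + 0.1x⁴, the damping factor, and the quadrature order are our choices, not the paper's.

```python
import numpy as np

n = 500                          # sample-size parameter in pi ∝ exp(-n v)
v1 = lambda x: x + 0.4 * x**3    # v'(x)  for v(x) = x^2/2 + 0.1 x^4
v2 = lambda x: 1.0 + 1.2 * x**2  # v''(x)

# Gauss-Hermite quadrature for E[f(Z)], Z ~ N(0, 1).
nodes, weights = np.polynomial.hermite_e.hermegauss(40)
weights = weights / weights.sum()
Ez = lambda f, m, s: float(np.sum(weights * f(m + np.sqrt(s) * nodes)))

# Damped fixed-point iteration on (2.1):
#   E[v'(m + sqrt(s) Z)] = 0,   s * E[v''(m + sqrt(s) Z)] = 1/n.
m, s = 0.5, 1.0 / n              # start near the minimizer m* = 0
for _ in range(200):
    m = m - 0.5 * Ez(v1, m, s) / Ez(v2, m, s)  # damped Newton step for the mean
    s = 1.0 / (n * Ez(v2, m, s))               # covariance update

assert abs(Ez(v1, m, s)) < 1e-8                # first equation holds
assert abs(s * Ez(v2, m, s) - 1.0 / n) < 1e-10 # second equation holds
```

With n = 500, the iteration settles at (m, s) ≈ (0, 1/(n v″(0))), i.e. the solution near (m∗, 0) discussed above.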
Note that due to the scaling of H with n, the set R_V is a small neighborhood of (m∗, 0). We call this unique solution (m, S) in R_V the "canonical" solution of (E_V). We expect the Gaussian distribution corresponding to this canonical solution to be the minimizer of (1.1), although we have not proved this. Regardless of whether this is true, we will redefine (m̂, Ŝ) to denote the canonical solution. Indeed, whether N(m̂, Ŝ) actually minimizes the KL divergence or is only a local minimizer is immaterial for the purpose of estimating mπ. In the rest of this section, we state our assumptions on v, a lemma guaranteeing a canonical solution (m̂, Ŝ) to (E_V), and our main results bounding the mean and covariance errors of the Laplace and Gaussian VI approximations.

2.1 Assumptions

Our main theorem rests on rather mild assumptions on the regularity of the potential v.

Assumption V0. The function v is at least C³ and has a unique global minimizer x = m∗. Let α2 be a lower bound on λmin(∇²v(m∗)) and β2 be an upper bound on λmax(∇²v(m∗)).

Assumption V1. There exists r > 0 such that N := nr ≥ d³ and

  (√r/(α2√α2)) sup_{∥y∥≤1} ∥∇³v(m∗ + √(r/α2) y)∥ ≤ 1/2.   (2.3)

Note that the left-hand side of (2.3) is monotonically increasing in r. Indeed, changing variables to z = √(r/α2) y, we see that the supremum is taken over the domain {∥z∥ ≤ √(r/α2)}, which grows with r.
Furthermore, the left-hand side equals zero when r = 0. Therefore, this assumption states that we can increase r from 0 up to a large multiple of d³/n while keeping the left-hand side below 1/2.

Remark 2.1. Define β3 = α2√α2/(2√r), so that r = α2³/(4β3²). By Assumption V1, we have

  sup_{∥y∥≤1} ∥∇³v(m∗ + √(r/α2) y)∥ ≤ β3.

Hence, we can also think of β3 as an upper bound on ∥∇³v∥. For future reference, we also define

  C2,3 := 1/r = 4β3²/α2³.   (2.4)

Assumption V2 (Polynomial growth of ∥∇^k v∥, k = 3, 4).
For some 0 < q ≲ 1 we have

  (√r/(α2√α2)) ∥∇³v(m∗ + √(r/α2) y)∥ ≤ 1 + ∥y∥^q,   ∀y ∈ ℝ^d.   (2.5)

Here, r is from Assumption V1. If v is C⁴, we additionally assume that

  (r/α2²) ∥∇⁴v(m∗ + √(r/α2) y)∥ ≲ 1 + ∥y∥^q,   ∀y ∈ ℝ^d,

with the same q and r. Note that Assumption V1 guarantees that (2.5) is satisfied inside the unit ball {∥y∥ ≤ 1}. Therefore, (2.5) simply states that we can extend the constant bound 1/2 to a polynomial bound outside the unit ball. Also, note that if (2.5) is satisfied for some q only up to a constant factor (i.e., ≲) in the region {∥y∥ ≥ 1}, then we can always increase q to ensure the inequality is satisfied exactly.

Assumption V3 (Growth of v away from the minimum). Let q be as in Assumption V2. Then

  v(m∗ + x) ≥ ((d + 12q + 36)/n) log(√(nβ2) ∥x∥),   ∀ ∥x∥ ≥ √(r/β2).   (2.6)

See Section 3 below for further explanation of the intuition behind, and consequences of, the above assumptions.

2.2 Main result

We are now ready to state our main results. First, we characterize the Gaussian VI parameters (m̂π, Ŝπ):

Lemma 1.
Let Assumptions V0, V1, and V2 be satisfied, and assume √(nr)/d ≥ 40√2 (√3 + √((2q)!)), where r and q are from Assumptions V1 and V2, respectively. Define H = ∇²V(m∗) = n∇²v(m∗). Then there exists a unique (m, S) = (m̂π, Ŝπ) in the set

  R_V = { (m, S) ∈ ℝ^d × S^d_{++} : S ⪯ 2H⁻¹, ∥√H √S∥² + ∥√H (m − m∗)∥² ≤ 8 }

which solves (E_V). Moreover, Ŝπ satisfies

  (2/(3nβ2)) Id ⪯ Ŝπ ⪯ (2/(nα2)) Id.   (2.7)

We now state our bounds on the mean and covariance errors. For simplicity, we restrict ourselves to the case v ∈ C⁴. See Theorem 1-W for results in the case v ∈ C³ \ C⁴.

Theorem 1 (Accuracy of Gaussian VI).
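As a quick plausibility check on the statement (our illustration; this case is degenerate in that Gaussian VI is exact): for the quadratic potential v(z) = ∥z∥²/2, the canonical solution is (m̂π, Ŝπ) = (0, Id/n), and the sandwich bound (2.7) holds with α2 = β2 = 1:

```python
import numpy as np

n, d = 100, 3
alpha2 = beta2 = 1.0       # v(z) = ||z||^2 / 2 has Hessian Id everywhere

m_hat = np.zeros(d)        # for a Gaussian target, the VI mean is exact
S_hat = np.eye(d) / n      # solves S * E[grad^2 v] = Id/n since grad^2 v = Id

# The fixed-point equations (E_V): E[grad v(m + S^{1/2} Z)] = m = 0,
# and S * Id = Id / n, both satisfied exactly here.
assert np.allclose(m_hat, 0.0)
assert np.allclose(S_hat, np.eye(d) / n)

# Sandwich bound (2.7): 2/(3 n beta2) Id <= S_hat <= 2/(n alpha2) Id.
eigs = np.linalg.eigvalsh(S_hat)
assert np.all(eigs >= 2.0 / (3.0 * n * beta2) - 1e-15)
assert np.all(eigs <= 2.0 / (n * alpha2) + 1e-15)
```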
Let Assumption V3 and the assumptions from Lemma 1 be satisfied, and let m̂π, Ŝπ be as in that lemma. Recall the definition of C2,3 from (2.4). If v ∈ C⁴, then

  ∥m̂π − mπ∥ ≲ (1/√(nα2)) (C2,3 d³/n)^{3/2},   ∥Ŝπ − Sπ∥ ≲ (1/(nα2)) (C2,3 d³/n).   (2.8)

In Section 3.3, we prove that Lemma 1 and Theorem 1 are a consequence of analogous statements for a certain affine invariant density. See that subsection, and Section 3 more generally, for proof overviews.

2.3 An example: Logistic Regression

As noted in the introduction, our results show that Gaussian VI yields very accurate mean and covariance approximations; in fact, the mean estimate is a full factor of 1/n more accurate than the mean estimate given by the Laplace approximation. Neither our bounds nor those on the Laplace error in [Spo22] and [KGB22] are proven to be tight, but we will now confirm numerically that the bounds give the correct asymptotic scalings with n for a logistic regression example. We also show how to check the assumptions for this example.

In logistic regression, we observe n covariates xi ∈ ℝ^d and corresponding labels yi ∈ {0, 1}. The labels are generated randomly from the covariates and a parameter z ∈ ℝ^d via

  p(yi | xi, z) = s(xiᵀz)^{yi} (1 − s(xiᵀz))^{1−yi},

where s(a) = (1 + e^{−a})^{−1} is the sigmoid. In other words, yi ∼ Bern(s(xiᵀz)). We take the ground truth z to be z = e1 = (1, 0, …, 0), and we generate the xi, i = 1, …, n, i.i.d. from N(0, λ²Id), so in particular the covariates themselves do not depend on z. We take a flat prior, so that the posterior distribution of z is simply the likelihood, π(z) = πn(z | x1:n) ∝ e^{−nv(z)}, where

  v(z) = −(1/n) Σ_{i=1}^n log p(yi | xi, z) = −(1/n) Σ_{i=1}^n [ yi log s(xiᵀz) + (1 − yi) log(1 − s(xiᵀz)) ].   (2.9)

Numerical Simulation. For the numerical simulation displayed in Figure 1, we take d = 2 and n = 100, 200, …, 1000. For each n, we draw ten sets of covariates xi, i = 1, …, n from N(0, λ²Id) with λ = √5, yielding ten posterior distributions πn(· | x1:n). We then compute the Laplace and VI mean and covariance approximation errors for each n and each of the ten posteriors at a given n. The solid lines in Figure 1 depict the average approximation errors over the ten distributions at each n.
The shaded regions depict the spread of the middle six of the ten approximation errors. See Appendix D for details about the simulation. In the left panel of Figure 1, depicting the mean error, the slopes of the best-fit lines are −1.04 and −2.02 for Laplace and Gaussian VI, respectively. For the covariance error in the right-hand panel, the slopes of the best-fit lines are −2.09 and −2.12 for Laplace and Gaussian VI. This confirms that our bounds, the mean bound of [KGB22], and the bound (1.4) (also implied by results in [Spo22]) are tight in their n dependence.
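A much-simplified sketch of this experiment (our construction, not the paper's code: d = 1, a single replication per n, the Laplace approximation only, and exact posterior moments by quadrature) exhibits the same qualitative decay:

```python
import numpy as np

rng = np.random.default_rng(1)
sig = lambda a: 1.0 / (1.0 + np.exp(-a))

def laplace_errors(n, lam=np.sqrt(5.0)):
    """Mean/covariance errors of the Laplace approximation for a d = 1
    logistic-regression posterior (2.9), moments computed by quadrature."""
    x = rng.normal(0.0, lam, n)
    y = rng.random(n) < sig(x)                    # labels, ground truth z = 1
    def nV(z):                                    # z: 1-d array of parameters
        a = np.outer(x, z)
        # -log s(a) = logaddexp(0, -a); -log(1 - s(a)) = logaddexp(0, a)
        return np.sum(np.where(y[:, None], np.logaddexp(0, -a),
                               np.logaddexp(0, a)), axis=0)
    m = 1.0                                       # Newton iteration for the MAP
    for _ in range(50):
        a = x * m
        g = -np.sum((y - sig(a)) * x)             # (nv)'(m)
        h = np.sum(sig(a) * (1 - sig(a)) * x**2)  # (nv)''(m) > 0
        m -= g / h
    S_lap = 1.0 / h                               # Laplace covariance
    grid = m + np.linspace(-12.0, 12.0, 2001) * np.sqrt(S_lap)
    w = np.exp(-(nV(grid) - nV(grid).min()))
    w /= w.sum()
    m_pi = np.sum(w * grid)                       # posterior mean and variance
    S_pi = np.sum(w * (grid - m_pi) ** 2)
    return abs(m - m_pi), abs(S_lap - S_pi)

ns = np.array([200, 400, 800, 1600])
errs = np.array([laplace_errors(int(n)) for n in ns])
# Log-log slope of the Laplace mean error; theory predicts roughly -1.
slope = np.polyfit(np.log(ns), np.log(errs[:, 0]), 1)[0]
```

With these choices the fitted slope typically lands near −1, in line with the −1.04 reported for Laplace above; reproducing the Gaussian VI slope of about −2 would additionally require solving (2.1) for each posterior.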
Verification of Assumptions. It is well known that the likelihood (2.9) is convex and has a finite global minimizer $z = m_*$ (the MLE) provided the data $x_i$, $i = 1, \dots, n$ are not linearly separable. Assumption V0 is satisfied in this case. For simplicity, we verify the remaining assumptions in the case that $n$ is large enough that we can approximate $v$ by the population log likelihood $v_\infty$, whose global minimizer is $m_* = e_1$, the ground truth vector. Using this approximation, we show in Appendix D that
$$\alpha_2 \gtrsim \lambda^2 s'(\lambda), \qquad \beta_2 \le \frac{\lambda^2}{4}, \qquad \|\nabla^3 v_\infty(z)\| \le \beta_3 := 2\lambda^3, \quad \forall z \in \mathbb{R}^d. \tag{2.10}$$
To verify Assumption V1, we need to find $r$ such that
$$\sup_{\|z - m_*\| \le \sqrt{r/\alpha_2}} \|\nabla^3 v(z)\| \le \frac{\alpha_2^{3/2}}{2\sqrt{r}}.$$
Using the uniform bound (2.10) on $\|\nabla^3 v\|$, it suffices to take $r = \frac{\alpha_2^3}{4\beta_3^2}$, in which case
$$C_{2,3} = \frac{1}{r} = \frac{4\beta_3^2}{\alpha_2^3} \lesssim \frac{1}{s'(\lambda)^3} \lesssim (1 + \cosh(\lambda))^3, \tag{2.11}$$
using that $s'(\lambda) = s(\lambda)(1 - s(\lambda)) = \frac{1}{2}(1 + \cosh(\lambda))^{-1}$. Thus Assumption V1 is satisfied as long as $n \ge d^3/r$, which is true provided $n$ is larger than a constant multiple of $(1 + \cosh(\lambda))^3$. Next, we can use (2.10) and a similar bound on $\|\nabla^4 v\|$ (which is also bounded uniformly over $\mathbb{R}^d$) to show that Assumption V2 is satisfied with $q = 0$. It remains to check Assumption V3, which we do in Appendix D using the convexity of $v$. Indeed, convexity immediately implies at least linear growth away from any point.
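The identity $s'(\lambda) = s(\lambda)(1 - s(\lambda)) = \frac{1}{2}(1 + \cosh(\lambda))^{-1}$ used in (2.11) is elementary; a quick numerical confirmation (illustrative only, not part of the paper's argument):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lam = np.linspace(-5.0, 5.0, 101)
lhs = sigmoid(lam) * (1.0 - sigmoid(lam))  # s'(lambda) = s(lambda)(1 - s(lambda))
rhs = 0.5 / (1.0 + np.cosh(lam))           # (1/2)(1 + cosh(lambda))^{-1}
max_gap = float(np.max(np.abs(lhs - rhs)))
```

Both expressions equal $\bigl(e^{\lambda/2} + e^{-\lambda/2}\bigr)^{-2}$, so the gap is at the level of floating-point roundoff.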
We conclude that the conditions of Theorem 1 are met.

3 Proof Overview: Affine Invariant Rescaling and Hermite Expansion

In this section, we overview the proof of Theorem 1. We start in Section 3.1 by explaining the affine invariance inherent to this problem. This motivates us to rescale $V = nv$ to obtain a new affine invariant function $W$. In Section 3.2, we state Assumptions W0–W3 on $W$, which include the definition of a scale-free parameter $N$ intrinsic to $W$. We then state our main results for $W$: Lemma 1-W and Theorem 1-W. In Section 3.3, we deduce Lemma 1 and Theorem 1 for $V$ from the lemma and theorem for $W$.
We outline the proof of Theorem 1-W in Section 3.4. The proof of Lemma 1-W is of a different flavor, and is postponed to Section 5.

3.1 Affine Invariance

To prove Theorem 1, we will bound the quantities
$$\|\hat S_\pi^{-1/2}(\hat m_\pi - m_\pi)\|, \qquad \|\hat S_\pi^{-1/2} S_\pi \hat S_\pi^{-1/2} - I_d\|. \tag{3.1}$$
As shown in Section 3.3, combining the bounds on (3.1) with bounds on $\|\hat S_\pi\|$ will give the desired estimates in Theorem 1. The reason for considering (3.1) rather than directly bounding the quantities in the theorems is explained in Section 3.4. In the following lemma, we show that the quantities (3.1) are affine invariant. We discuss the implications of this fact at the end of the subsection. First, define:

Definition 3.1. Let $f$ be a $C^2$ function with unique global minimizer $m_{*f}$, and let $H_f = \nabla^2 f(m_{*f})$. Then
$$\mathcal{R}_f = \Big\{ (m, S) \in \mathbb{R}^d \times S_{++}^d : S \preceq 2H_f^{-1}, \ \big\|\sqrt{H_f}\sqrt{S}\big\|^2 + \big\|\sqrt{H_f}(m - m_{*f})\big\|^2 \le 8 \Big\}. \tag{3.2}$$

Lemma 3.1.
Let $V_1, V_2 \in C^2(\mathbb{R}^d)$, where $V_2(x) = V_1(Ax + b)$ for some $b \in \mathbb{R}^d$ and invertible $A \in \mathbb{R}^{d \times d}$. Let $\pi_i \propto e^{-V_i}$, $i = 1, 2$. Then the pair $(\hat m_1, \hat S_1)$ is a unique solution to $(E_{V_1})$ in the set $\mathcal{R}_{V_1}$ if and only if the pair $(\hat m_2, \hat S_2)$ given by
$$\hat m_2 = A^{-1}(\hat m_1 - b), \qquad \hat S_2 = A^{-1} \hat S_1 A^{-T} \tag{3.3}$$
is a unique solution to $(E_{V_2})$ in the set $\mathcal{R}_{V_2}$. Furthermore,
$$\|\hat S_2^{-1/2}(\hat m_2 - m_{\pi_2})\| = \|\hat S_1^{-1/2}(\hat m_1 - m_{\pi_1})\|, \qquad \|\hat S_2^{-1/2} S_{\pi_2} \hat S_2^{-1/2} - I_d\| = \|\hat S_1^{-1/2} S_{\pi_1} \hat S_1^{-1/2} - I_d\|. \tag{3.4}$$

See Lemma C.1 of Appendix C for the proof of the first statement. The proof of (3.4) follows from (3.3), the fact that
$$m_{\pi_2} = A^{-1}(m_{\pi_1} - b), \qquad S_{\pi_2} = A^{-1} S_{\pi_1} A^{-T}, \tag{3.5}$$
and the following lemma.

Lemma 3.2. Let $C, D \in S_{++}^d$ be symmetric positive definite matrices and $x \in \mathbb{R}^d$. Then
$$\|C^{-1/2}x\| = \sqrt{x^T C^{-1} x} \quad\text{and}\quad \|C^{-1/2} D C^{-1/2} - I_d\| = \sup_{u \neq 0} \left| \frac{u^T D u}{u^T C u} - 1 \right|.$$

This is a simple linear algebra result, so we omit the proof.

Discussion. Lemma 3.1 shows that our bounds on the quantities (3.1) should themselves be affine invariant, i.e.
the same bounds should hold if we replace $V = nv$ by any function in the set $\{V(A\,\cdot + b) : A \in \mathbb{R}^{d \times d} \text{ invertible},\ b \in \mathbb{R}^d\}$. This motivates us to identify an affine-invariant large parameter $N$. It is clear that $n$ itself cannot be the correct parameter $N$ because $n$ is not well-defined: $nv = (n/c)(cv)$ for any $c > 0$. Another natural candidate for $N$, which removes this degree of freedom, is $N = \lambda_{\min}(\nabla^2 V(m_*))$. However, $\lambda_{\min}(\nabla^2 V(m_*))$ is not affine invariant. Indeed, replacing $V(x)$ by $V(cx)$, for example, changes $\lambda_{\min}$ by a factor of $c^2$. To obtain an affine invariant bound, we will define $N$ in Assumption W1 below as a parameter intrinsic to the function $W = W[V]$ given by
$$W(x) = V(H_V^{-1/2} x + m_{*V}). \tag{3.6}$$
It is straightforward to show that for any other $V_2(x) = V(Ax + b)$, we have
$$W[V_2](x) = V_2(H_{V_2}^{-1/2} x + m_{*V_2}) = V(H_V^{-1/2} x + m_{*V}) = W[V](x).$$
In other words, any function $V_2$ in the set $\{V(A\,\cdot + b) : A \in \mathbb{R}^{d \times d} \text{ invertible},\ b \in \mathbb{R}^d\}$ maps to the same, affine invariant $W$. This function is the "correct" object of study, and any bounds we obtain must follow from properties intrinsic to $W$.

3.2 Assumptions and Results for W

In this section, we state assumptions on $W$, one of which identifies an appropriate affine invariant parameter $N$ intrinsic to $W$. This parameter is such that as $N$ increases, the measure $\rho \propto e^{-W}$ is more closely approximated by a Gaussian. We then state results on the existence and uniqueness of solutions $\hat m_\rho, \hat S_\rho$ to the first order optimality equations $(E_W)$, and obtain bounds in terms of $d$ and $N$ on the quality of the VI approximation to the mean and covariance of $\rho$.

Assumption W0.
Let $W$ be at least $C^3$, with unique global minimizer $x = 0$, and $\nabla^2 W(0) = I_d$. Moreover, assume without loss of generality that $W(0) = 0$.

Next, we identify $N$ as a parameter quantifying the size of $\|\nabla^3 W\|$ in a certain neighborhood of zero:

Assumption W1. There exists $N \ge d^3$ such that
$$\sqrt{N} \sup_{\|x\| \le 1} \big\|\nabla^3 W(\sqrt{N}\,x)\big\| \le \frac{1}{2}. \tag{3.7}$$

This definition ensures that $N$ scales proportionally to $n$. Indeed, suppose $W_1$ is the affine invariant function corresponding to the equivalence class containing $n_1 v$, and let $W_1$ satisfy (3.7) with $N = N_1$. Then the affine invariant $W_2$ corresponding to the equivalence class containing $n_2 v$ is given by $W_2(x) = \frac{n_2}{n_1} W_1\big(\frac{\sqrt{n_1}}{\sqrt{n_2}}\,x\big)$.
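This proportional-scaling claim can be checked concretely in one dimension. Below, $W_1$ is a cubic that saturates (3.7) with $N = N_1$ (our toy construction, not from the paper), and the rescaled $W_2$ is verified to satisfy (3.7) with equality at $N_2 = (n_2/n_1) N_1$:

```python
import numpy as np

n1, n2 = 500.0, 2000.0
N1 = 400.0
eps = 1.0 / (12.0 * np.sqrt(N1))  # then sqrt(N1) * |W1'''| = 6*eps*sqrt(N1) = 1/2

def W1(x):
    return 0.5 * x**2 + eps * x**3

def W2(x):
    # the rescaling W2(x) = (n2/n1) * W1(sqrt(n1/n2) * x) from the text
    return (n2 / n1) * W1(np.sqrt(n1 / n2) * x)

def third_deriv(f, x, h):
    # exact for cubic polynomials (stencil annihilates degrees 0..2)
    return (f(x + 2*h) - 3*f(x + h) + 3*f(x) - f(x - h)) / h**3

N2 = (n2 / n1) * N1
lhs = np.sqrt(N2) * abs(third_deriv(W2, 0.0, 0.5))
# lhs equals 1/2: W2 satisfies (3.7) with equality at N = N2 = (n2/n1) * N1
```

Since $W_2'''$ is constant here, evaluating at a single point suffices; in general the supremum over the ball $\|x\| \le 1$ would have to be taken.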
From here it is straightforward to see that $W_2$ satisfies (3.7) with $N = N_2$, where $N_2/N_1 = n_2/n_1$. To further understand the intuition behind this assumption, consider the following lemma.

Lemma 3.3. Let $W$ satisfy Assumptions W0 and W1 and let $C \le \sqrt{N/d}$. Then
$$\left| W(x) - \frac{\|x\|^2}{2} \right| \le \frac{C^3}{12} \frac{d\sqrt{d}}{\sqrt{N}}, \quad \forall \|x\| \le C\sqrt{d}, \qquad\qquad W(x) \ge \frac{\|x\|^2}{4}, \quad \forall \|x\| \le \sqrt{N}. \tag{3.8}$$

The lemma shows that $N$ quantifies how close $W$ is to a quadratic, and therefore how close $\rho \propto e^{-W}$ is to being Gaussian.

Proof.
Taylor expanding $W(x)$ to second order for $\|x\| \le C\sqrt{d}$ and using (3.7), we have
$$|W(x) - \|x\|^2/2| \le \frac{1}{3!} \sup_{\|x\| \le C\sqrt{d}} \|x\|^3 \big\|\nabla^3 W(x)\big\| \le \frac{C^3}{12} \frac{d\sqrt{d}}{\sqrt{N}}. \tag{3.9}$$
The second inequality in (3.8) follows from the fact that $\nabla^2 W(x) \succeq \frac{1}{2} I_d$ for all $\|x\| \le \sqrt{N}$, as we now show. Taylor expanding $\nabla^2 W(x)$ to zeroth order, we get that
$$\|\nabla^2 W(x) - \nabla^2 W(0)\| \le \sup_{\|x\| \le \sqrt{N}} \|x\| \big\|\nabla^3 W(x)\big\| \le \frac{1}{2}.$$
Since $\nabla^2 W(0) = I_d$ it follows that $\nabla^2 W(x) \succeq \frac{1}{2} I_d$.

Assumption W2 (Polynomial growth of $\|\nabla^k W\|$, $k = 3, 4$). There exists $0 < q \lesssim 1$ such that
$$\sqrt{N} \left\| \nabla^3 W\big(\sqrt{N}\,x\big) \right\| \le 1 + \|x\|^q, \quad \forall x \in \mathbb{R}^d. \tag{3.10}$$
If $W$ is $C^4$, then the following bound also holds with the same $q$:
$$N \left\| \nabla^4 W\big(\sqrt{N}\,x\big) \right\| \lesssim 1 + \|x\|^q, \quad \forall x \in \mathbb{R}^d. \tag{3.11}$$

The factor of $N$ in (3.11) is also chosen to respect the proportional scaling of $N$ with $n$: if the affine invariant $W_1$ corresponding to $n_1 v$ satisfies (3.11) with $N = N_1$, then the affine invariant $W_2$ corresponding to $n_2 v$ satisfies (3.11) with the same $q$ and $N = N_2$, where $N_2/N_1 = n_2/n_1$. Note that Assumption W1 guarantees that (3.10) is satisfied inside the unit ball; therefore, (3.10) simply states that we can extend the constant bound $1/2$ to a polynomial bound outside of this ball. Also, note that if the inequality is satisfied for some $q$ only up to a constant factor (i.e. $\lesssim$) in the region $\{\|x\| \ge 1\}$, then we can always increase $q$ to ensure the inequality is satisfied exactly.

Assumption W2 implies that expectations of the form $\mathbb{E}\,[\|\nabla^k W(Y)\|^p]$, $k = 3, 4$, decay with $N$. Indeed, we have

Lemma 3.4. Let $p \ge 0$ and $Y \in \mathbb{R}^d$ be a random variable such that $\mathbb{E}\,[\|Y\|^{pq}] < \infty$, where $q$ is from Assumption W2. Let $k = 3$ or $4$, corresponding to the cases $W \in C^3$ or $W \in C^4$, respectively. Then
$$\mathbb{E}\,[\|\nabla^k W(Y)\|^p] \lesssim N^{-p(\frac{k}{2}-1)} \left( 1 + \mathbb{E}\big[ \|Y/\sqrt{d}\|^{pq} \big] \right).$$
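Before the proof, a numerical illustration of the lemma's mechanism for $k = 3$, $d = 1$ (all parameters are our choice): taking the largest third-derivative profile permitted by (3.10) pointwise, the empirical moment sits below the bound with the explicit constant $2^{p-1}$ that the $\lesssim$ hides.

```python
import numpy as np

rng = np.random.default_rng(3)
N, p, q = 1000.0, 2.0, 1.0
Y = rng.standard_normal(200_000)  # d = 1, so Y / sqrt(d) = Y

# extreme case allowed by (3.10): ||grad^3 W(y)|| = (1 + |y/sqrt(N)|^q) / sqrt(N)
g3 = (1.0 + np.abs(Y / np.sqrt(N))**q) / np.sqrt(N)

lhs = np.mean(g3**p)
# bound N^{-p(k/2 - 1)} (1 + E|Y|^{pq}) with k = 3 and constant 2^{p-1}
rhs = N**(-p / 2) * 2**(p - 1) * (1.0 + np.mean(np.abs(Y)**(p * q)))
```

The comparison holds sample by sample (since $(1+a)^p \le 2^{p-1}(1+a^p)$ for $p \ge 1$ and $N \ge d$), so the empirical means satisfy it deterministically, not just on average.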
Proof. By Assumption W2,
$$\mathbb{E}\,[\|\nabla^k W(Y)\|^p] \lesssim N^{-p(\frac{k}{2}-1)} \,\mathbb{E}\Big[ \big( 1 + \|Y/\sqrt{N}\|^q \big)^p \Big] \le N^{-p(\frac{k}{2}-1)} \,\mathbb{E}\Big[ \big( 1 + \|Y/\sqrt{d}\|^q \big)^p \Big] \lesssim N^{-p(\frac{k}{2}-1)} \left( 1 + \mathbb{E}\big[ \|Y/\sqrt{d}\|^{pq} \big] \right). \tag{3.12}$$
In the second step we used that $d \le N$. If $\mathbb{E}\,[\|Y/\sqrt{d}\|^{pq}]$ is $d$-independent, as for Gaussian random variables, then the above bound reduces to $\mathbb{E}\,[\|\nabla^k W(Y)\|^p] \lesssim N^{-p(\frac{k}{2}-1)}$.

Assumption W3 (Separation from Zero; Growth at Infinity). We have
$$W(x) \ge (d + 12q + 36) \log \|x\|, \quad \forall \|x\| \ge \sqrt{N},$$
where $q$ is from Assumption W2.

Remark 3.1.
For consistency with the previous assumptions, let us also reformulate this one in terms of $W(\sqrt{N}\,x)$:
$$W(\sqrt{N}\,x) \ge (d + 12q + 36) \log \sqrt{N} + (d + 12q + 36) \log \|x\|, \quad \forall \|x\| \ge 1. \tag{3.13}$$
Recall that inside the unit ball, $W(\sqrt{N}\,x)$ is no less than $N\|x\|^2/4$, by Lemma 3.3. Therefore, the value of $W(\sqrt{N}\,x)$ increases up to at least $N/4$ as $x$ approaches unit norm. We can interpret (3.13) as saying that outside the unit ball, we must maintain constant separation of order $d \log N$ from zero, and $W(x)$ must grow at least logarithmically in $\|x\|$ as $\|x\| \to \infty$.

We now state the existence and uniqueness of solutions to $(E_W)$ in the region $\mathcal{R}_W$.

Lemma 1-W.
Take Assumptions W0, W1, and W2 to be true, and assume $\sqrt{N}/d \ge 40\sqrt{2}\big(\sqrt{3} + \sqrt{(2q)!}\big)$, where $q$ is from Assumption W2. Then there exists a unique $(m, S) = (\hat m_\rho, \hat S_\rho) \in \mathcal{R}_W$,
$$\mathcal{R}_W = \big\{ (m, S) \in \mathbb{R}^d \times S_{++}^d : S \preceq 2I_d, \ \|S\| + \|m\|^2 \le 8 \big\}, \tag{3.14}$$
solving $(E_W)$. The matrix $\hat S_\rho$ furthermore satisfies
$$\tfrac{2}{3} I_d \preceq \hat S_\rho \preceq 2I_d. \tag{3.15}$$

See Section 5 for the proof. Note that $\mathcal{R}_W$ as defined here is the same as in Definition 3.1, since $m_{*W} = 0$ and $H_W = \nabla^2 W(0) = I_d$. We will make frequent use of the following inequality, which summarizes the bounds on $\hat m_\rho, \hat S_\rho$ guaranteed by the lemma:
$$\|\hat m_\rho\| \le 2\sqrt{2}, \qquad \tfrac{2}{3} I_d \preceq \hat S_\rho \preceq 2I_d. \tag{3.16}$$

Theorem 1-W. Take Assumptions W0, W1, W2, and W3 to be true, and let $(\hat m_\rho, \hat S_\rho)$ be as in the above lemma. Then
$$\|\hat S_\rho^{-1/2}(\hat m_\rho - m_\rho)\| \lesssim \begin{cases} \dfrac{d^3}{N}, & \text{if } W \in C^3, \\[2mm] \left( \dfrac{d^3}{N} \right)^{3/2}, & \text{if } W \in C^4, \end{cases} \qquad\qquad \|\hat S_\rho^{-1/2} S_\rho \hat S_\rho^{-1/2} - I_d\| \lesssim \frac{d^3}{N}.$$

3.3 From V to W and back

In the following sections, we prove Lemma 1-W and Theorem 1-W. In Lemma C.2 in the appendix, we show that Assumptions V0–V3 imply Assumptions W0–W3 with $N = nr$ and the same $q$. From these results, Lemma 1 and Theorem 1 easily follow.

Proof of Lemma 1.
Let $\rho \propto e^{-W}$, where $W$ is defined as in (3.6). By Lemma C.2, the assumptions on $V$ imply the assumptions on $W$. Hence, we can apply Lemma 1-W to conclude there is a unique $(\hat m_\rho, \hat S_\rho) \in R_W$ solving $(E_W)$, with $\frac{2}{3} I_d \preceq \hat S_\rho \preceq 2 I_d$. Since $W$ is an affine transformation of $V$, it follows by Lemma 3.1 that there exists a unique $(\hat m_\pi, \hat S_\pi) \in R_V$ solving $(E_V)$, with $\hat S_\pi = H_V^{-1/2} \hat S_\rho H_V^{-1/2}$. The inequality (2.7) for $\pi$ can be deduced from the corresponding inequality (3.15) for $\hat S_\rho$.

Proof of Theorem 1. First note that
\[
\|\hat m_\pi - m_\pi\| \le \|\hat S_\pi^{1/2}\| \, \|\hat S_\pi^{-1/2}(\hat m_\pi - m_\pi)\| \lesssim \frac{1}{\sqrt{n\alpha_2}} \, \|\hat S_\pi^{-1/2}(\hat m_\pi - m_\pi)\|,
\]
and
\[
\|\hat S_\pi - S_\pi\| = \|\hat S_\pi^{1/2}(\hat S_\pi^{-1/2} S_\pi \hat S_\pi^{-1/2} - I_d)\hat S_\pi^{1/2}\| \lesssim \frac{1}{n\alpha_2} \, \|\hat S_\pi^{-1/2} S_\pi \hat S_\pi^{-1/2} - I_d\|, \tag{3.17}
\]
using the bound on $\hat S_\pi$ from Lemma 1. Next note that by Lemma 3.1 (affine invariance) we have
\[
\|\hat S_\pi^{-1/2}(\hat m_\pi - m_\pi)\| = \|\hat S_\rho^{-1/2}(\hat m_\rho - m_\rho)\|, \qquad \|\hat S_\pi^{-1/2} S_\pi \hat S_\pi^{-1/2} - I_d\| = \|\hat S_\rho^{-1/2} S_\rho \hat S_\rho^{-1/2} - I_d\|. \tag{3.18}
\]
Apply Theorem 1-W to conclude, recalling that $N = nr$ and $C_{2,3} = 1/r$, so that $d^3/N = d^3/(nr) = C_{2,3}\, d^3/n$.

3.4 Overview of Theorem 1-W proof

For brevity let $m = \hat m_\rho$, $S = \hat S_\rho$, and $\sigma = S^{1/2}$. We continue to denote the mean and covariance of $\rho$ by $m_\rho$ and $S_\rho$, respectively.
Let $\bar W(x) = W(m + \sigma x)$ and note that the optimality equations $(E_W)$ can be written as
\[
\mathbb{E}[\nabla \bar W(Z)] = 0, \qquad \mathbb{E}[\nabla^2 \bar W(Z)] = I_d. \tag{3.19}
\]
The proof of Theorem 1-W is based on several key observations.

1) The optimality conditions (3.19) imply that the Hermite series expansion of $\bar W$ is given by
\[
\bar W(x) = \mathrm{const.} + \tfrac{1}{2}\|x\|^2 + r_3(x), \quad \text{where} \quad r_3(x) = \sum_{k \ge 3} \frac{1}{k!} \langle c_k(\bar W), H_k(x) \rangle. \tag{3.20}
\]

2) The assumptions on $W$ imply that $r_3 \sim N^{-1/2}$.

3) We can represent the quantities of interest from Theorem 1-W as expectations with respect to $\bar X \sim \bar\rho \propto e^{-\bar W}$:
\[
\|\sigma^{-1}(m_\rho - m)\| = \sup_{\|u\|=1} \mathbb{E}[f_{1,u}(\bar X)], \qquad \|\sigma^{-1} S_\rho \sigma^{-1} - I_d\| \le \sup_{\|u\|=1} \mathbb{E}[f_{2,u}(\bar X)] + \|\sigma^{-1}(m_\rho - m)\|^2, \tag{3.21}
\]
where $f_{1,u}(x) = u^T x$ and $f_{2,u}(x) = (u^T x)^2 - 1$.

4) We have
\[
\mathbb{E}[f(\bar X)] = \frac{\mathbb{E}[f(Z) e^{-r_3(Z)}]}{\mathbb{E}[e^{-r_3(Z)}]} = \frac{\mathbb{E}[f(Z)(1 - r_3(Z) + r_3(Z)^2/2 + \dots)]}{\mathbb{E}[e^{-r_3(Z)}]}. \tag{3.22}
\]

5) We have $\mathbb{E}[f(Z)] = 0$ and $\mathbb{E}[f(Z) r_3(Z)] = 0$ for $f = f_{1,u}, f_{2,u}$, because the remainder $r_3$ is orthogonal to linear and quadratic $f$ with respect to the Gaussian measure. Therefore, the leading order term in $\mathbb{E}[f(\bar X)]$ is $\frac{1}{2}\mathbb{E}[f(Z) r_3(Z)^2] \sim N^{-1}$ for both $f = f_{1,u}$ and $f = f_{2,u}$, and hence by (3.21) the quantities of interest are no larger than $N^{-1}$. This is the essence of the proof when $W \in C^3$.
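Point 5) can be illustrated numerically in one dimension. The sketch below (an illustration of the mechanism, not part of the paper) tilts the standard Gaussian by $r_3 = \varepsilon H_3 / 6$ with $H_3(x) = x^3 - 3x$, where $\varepsilon$ plays the role of $N^{-1/2}$, and compares $\mathbb{E}[f_{2,u}(\bar X)]$ against the predicted leading term $\frac{1}{2}\mathbb{E}[f_{2,u}(Z) r_3(Z)^2]$, which here equals $\varepsilon^2/2$:

```python
import numpy as np

# 1D illustration of point 5): rho-bar ∝ exp(-(x^2/2 + r3(x))) with
# r3 = eps * H3 / 6, H3(x) = x^3 - 3x; eps plays the role of N^{-1/2}.
eps = 0.02
x = np.linspace(-8.0, 8.0, 400001)
r3 = eps * (x**3 - 3 * x) / 6.0

p = np.exp(-(x**2 / 2 + r3))
p /= np.trapz(p, x)                         # normalize rho-bar on the grid
second = np.trapz((x**2 - 1) * p, x)        # E[f_2(Xbar)], f_2(x) = x^2 - 1

phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
pred = 0.5 * np.trapz((x**2 - 1) * r3**2 * phi, x)  # (1/2) E[f_2(Z) r3(Z)^2]
```

The two quantities agree up to a relative error of order $\varepsilon^2$, consistent with the claim that the neglected terms are of higher order in $N^{-1/2}$.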
Now that we have given this overview, let us go into a few more details about the above points, and consider the case $W \in C^4$.

1) We can write $\bar W(x) = \mathrm{const.} + \frac{1}{2}\|x\|^2 + r_3(x)$, where $r_3$ is the third order Hermite series remainder. The Hermite series expansion of $\bar W$ is defined as
\[
\bar W(x) = \sum_{k=0}^{\infty} \frac{1}{k!} \langle c_k(\bar W), H_k(x) \rangle, \qquad c_k(\bar W) := \mathbb{E}[\bar W(Z) H_k(Z)]. \tag{3.23}
\]
Here, the $c_k$ and $H_k(x)$ are tensors in $(\mathbb{R}^d)^{\otimes k}$. Specifically, $H_k(x)$ is the tensor of all order $k$ Hermite polynomials, enumerated as $H_k^{(\alpha)}$, $\alpha \in [d]^k$, with some entries repeating. For $k = 0, 1, 2$, the Hermite tensors are given by $H_0(x) = 1$, $H_1(x) = x$, $H_2(x) = xx^T - I_d$. See Appendices A.1 and B.1 for further details on Hermite series.

Distinct Hermite polynomials are orthogonal to each other with respect to the Gaussian weight. In particular, if $f$ is an order $k$ polynomial and $\ell > k$, then $\mathbb{E}[f(Z) H_\ell^{(\alpha)}(Z)] = 0$ for all $\alpha \in [d]^\ell$. In general, the $H_k$ are given by
\[
H_k(x) e^{-\|x\|^2/2} = (-1)^k \nabla^k e^{-\|x\|^2/2}. \tag{3.24}
\]
This representation of the Hermite polynomials leads to the following "Gaussian integration by parts" identity for a $k$-times differentiable function $f$:
\[
\mathbb{E}[f(Z) H_k(Z)] = \mathbb{E}[\nabla^k f(Z)]. \tag{3.25}
\]
This is a generalization of Stein's identity, $\mathbb{E}[Z_i f(Z)] = \mathbb{E}[\partial_{x_i} f(Z)]$.
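Identity (3.25) is easy to verify numerically in the one-dimensional case. The following sketch (an illustration, not from the paper) checks $\mathbb{E}[f(Z) H_k(Z)] = \mathbb{E}[f^{(k)}(Z)]$ for $f(x) = x^3$ and $k = 1, 2, 3$, using Gauss-Hermite quadrature for the Gaussian expectations:

```python
import numpy as np

# E[g(Z)] for Z ~ N(0,1) via Gauss-Hermite quadrature: the hermgauss
# nodes/weights target the weight exp(-t^2), so substitute z = sqrt(2) t.
t, w = np.polynomial.hermite.hermgauss(40)
z, wz = np.sqrt(2.0) * t, w / np.sqrt(np.pi)
E = lambda g: float(np.sum(wz * g(z)))

# Probabilists' Hermite polynomials of orders 1-3 (the d = 1 case of H_k)
H = {1: lambda x: x, 2: lambda x: x**2 - 1, 3: lambda x: x**3 - 3 * x}

# Check E[f(Z) H_k(Z)] = E[f^(k)(Z)] for f(x) = x^3
f = lambda x: x**3
deriv = {1: lambda x: 3 * x**2, 2: lambda x: 6 * x, 3: lambda x: 6.0 + 0 * x}
lhs = {k: E(lambda x, k=k: f(x) * H[k](x)) for k in (1, 2, 3)}
rhs = {k: E(deriv[k]) for k in (1, 2, 3)}
```

For $k = 1$ this is exactly Stein's identity: $\mathbb{E}[Z \cdot Z^3] = 3 = \mathbb{E}[3Z^2]$.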
Since $\bar W$ is at least three times differentiable, we can use Gaussian integration by parts to write $c_1, c_2$ as
\[
c_1(\bar W) := \mathbb{E}[\bar W(Z) H_1(Z)] = \mathbb{E}[\nabla \bar W(Z)] = 0, \qquad c_2(\bar W) := \mathbb{E}[\bar W(Z) H_2(Z)] = \mathbb{E}[\nabla^2 \bar W(Z)] = I_d, \tag{3.26}
\]
where the last equality in each line comes from the optimality conditions (3.19). Therefore the Hermite series expansion of $\bar W$ takes the form
\[
\bar W(x) = \mathbb{E}[\bar W(Z)] + \langle 0, H_1(x) \rangle + \tfrac{1}{2} \langle I_d, H_2(x) \rangle + r_3(x) = \mathbb{E}[\bar W(Z)] + 0 + \tfrac{1}{2}(\|x\|^2 - d) + r_3(x) = \mathrm{const.} + \tfrac{1}{2}\|x\|^2 + r_3(x), \tag{3.27}
\]
where $r_3$ is the third order remainder.

2) The assumptions imply $r_3 \sim 1/\sqrt{N}$. Indeed, since $W$ is $C^3$ and $k \ge 3$, we can apply "partial" Gaussian integration by parts to express $c_k$ as
\[
c_k = \mathbb{E}[H_k(Z) \bar W(Z)] = \mathbb{E}[H_{k-3}(Z) \otimes \nabla^3 \bar W(Z)].
\]
But by Assumptions W1 and W2 we have that $\|\nabla^3 W\| \sim 1/\sqrt{N}$, and hence $\|\nabla^3 \bar W\| \le \|\sigma\|^3 \|\nabla^3 W\| \sim 1/\sqrt{N}$, since $\sigma \preceq \sqrt{2}\, I_d$ by Lemma 1-W. Therefore each $c_k \sim 1/\sqrt{N}$ for $k \ge 3$, so $r_3 \sim 1/\sqrt{N}$ as well.

Now suppose $W \in C^4$, and write $r_3$ as
\[
r_3(x) = \frac{1}{3!} \langle c_3, H_3(x) \rangle + r_4(x).
\]
We know $\langle c_3, H_3(x) \rangle \sim 1/\sqrt{N}$, and by an analogous argument as for $r_3$, we can show that each of the coefficients $c_k$, $k \ge 4$, has order $1/N$. Hence $r_4 \sim 1/N$, so that $r_3 = O(N^{-1/2}) + O(N^{-1})$ and $r_3^2 = O(N^{-1}) + O(N^{-3/2}) + O(N^{-2})$. We can then show that the order $N^{-1}$ term in $r_3^2$ is orthogonal to $f_{1,u}$ with respect to the Gaussian weight, and hence $\mathbb{E}[f_{1,u}(Z) r_3(Z)^2]$ is of order $N^{-3/2}$. This is why the mean error is smaller when $W \in C^4$.

We will prove 3) in the next section, and 4) follows directly from the representation (3.27). 5) follows from the definition of $r_3$ as a sum of third and higher order Hermite polynomials.

This discussion explains how the $N^{-1}$ and $N^{-3/2}$ scalings arise in Theorem 1-W. Obtaining the correct scaling with dimension $d$ requires a bit more work. The scaling with $d$ of the overall error bound depends, among other things, on the scaling with $d$ of expectations of the form $\mathbb{E}[r_k(Z)^p]$, $k = 3, 4$ (see Lemma 4.1 below for further discussion of the bound's $d$-dependence). We show that $\mathbb{E}[r_k(Z)^p] \sim \mathbb{E}[(\|Z\|^k)^p] \sim d^{pk/2}$ using the following explicit formula for $r_k$. This result is known in one dimension; see Section 4.15 in [Leb72].
However, we could not find the multidimensional version in the literature, so we have proved it here.

Proposition 3.1. Assume $\bar W \in C^k$ for $k = 3$ or $k = 4$. Let $\bar W(x) = \sum_{j=0}^{\infty} \frac{1}{j!} \langle c_j(\bar W), H_j(x) \rangle$ be the Hermite series expansion of $\bar W$, and define
\[
r_k(x) = \bar W(x) - \sum_{j=0}^{k-1} \frac{1}{j!} \langle c_j(\bar W), H_j(x) \rangle. \tag{3.28}
\]
Then
\[
r_k(x) = \int_0^1 \frac{(1-t)^{k-1}}{(k-1)!} \, \mathbb{E}\left[\left\langle \nabla^k \bar W((1-t)Z + tx),\ H_k(x) - Z \otimes H_{k-1}(x) \right\rangle\right] dt. \tag{3.29}
\]

Note that (3.29) is analogous to the integral form of the remainder of a Taylor series. We state and prove this proposition in greater generality in Appendix B.1 below. Carefully applying Cauchy-Schwarz to the inner product in this formula (using the operator norm rather than the Frobenius norm, which would incur additional dimension dependence) allows us to bound $\mathbb{E}[|r_k(Z)|^p]$ by a product of expectations. One expectation involves $\|\nabla^k \bar W\|^p \sim N^{p(1-k/2)}$, and the other expectation, stemming from the $H_k$ and $Z \otimes H_{k-1}$ on the right-hand side of the inner product, involves a $(pk)$th degree polynomial in $\|Z\|$. This explains the $d^{pk/2}$ scaling of $\mathbb{E}[|r_k(Z)|^p]$.

4 Proof of Theorem 1-W

Let $\bar X \sim \bar\rho \propto e^{-\bar W}$, where $\bar W(x) = W(m + \sigma x)$, and $\sigma = \hat S_\rho^{1/2}$, $m = \hat m_\rho$ are from Lemma 1-W. Also, let $r_3(x) = \sum_{k \ge 3} \frac{1}{k!} \langle c_k(\bar W), H_k(x) \rangle$ be the remainder of the Hermite expansion of $\bar W$.

Lemma 4.1 (Preliminary Bound). If $W \in C^3$, then we have
\[
\|\sigma^{-1}(m - m_\rho)\| \lesssim \sqrt{\mathbb{E}\, r_3(Z)^4} + \sqrt{\mathbb{E}\, r_3(Z)^6} + \sqrt{\mathbb{E}\, r_3(\bar X)^6} \, \sup_{\|u\|=1} \sqrt{\mathbb{E}\,(u^T \bar X)^2} \tag{4.1}
\]
and
\[
\|\sigma^{-1} S_\rho \sigma^{-1} - I_d\| \lesssim \|\sigma^{-1}(m - m_\rho)\|^2 + \sqrt{\mathbb{E}\, r_3(Z)^4} + \sqrt{\mathbb{E}\, r_3(Z)^6} + \sqrt{\mathbb{E}\, r_3(\bar X)^6} \, \sup_{\|u\|=1} \sqrt{\mathbb{E}\,((u^T \bar X)^2 - 1)^2}. \tag{4.2}
\]
If $W \in C^4$, then
\[
\|\sigma^{-1}(m - m_\rho)\| \lesssim \sqrt{\mathbb{E}\, r_3(Z)^6} + \sqrt{\mathbb{E}\, r_3(\bar X)^6} \, \sup_{\|u\|=1} \sqrt{\mathbb{E}\,(u^T \bar X)^2} + \sup_{\|u\|=1} \left| \left\langle u \otimes c_3 \otimes c_4,\ \mathbb{E}[Z \otimes H_3(Z) \otimes H_4(Z)] \right\rangle \right| + \sqrt{\mathbb{E}\, r_4(Z)^4}. \tag{4.3}
\]

Remark 4.1. From the discussion in the previous section, we know $c_3, r_3 \sim N^{-1/2}$ and $c_4, r_4 \sim N^{-1}$. Therefore, we can easily read off the $N$-dependence of the overall error bound from (4.1) and (4.3). The $d$-dependence of the terms of the form $\sqrt{\mathbb{E}[r_k(Z)^p]}$ can be computed from our explicit formula for $r_k$, as discussed above. Furthermore, simple Laplace-type integral bounds in Section 4.3 show that $\mathbb{E}[f(\bar X)] \lesssim \mathbb{E}[f(Z)]$, so the $d$-dependence of the $\bar X$ expectations is the same as that of the $Z$ expectations. Finally, the $d$-dependence of $\langle u \otimes c_3 \otimes c_4, \mathbb{E}[Z \otimes H_3 \otimes H_4] \rangle$ can be estimated using the structure of the Hermite tensors; in particular, we show that at most $O(d^4)$ of the $d^8$ entries of $\mathbb{E}[Z \otimes H_3 \otimes H_4]$ are nonzero.
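The remainder formula (3.29) can be checked directly in one dimension. As an illustrative sketch (not from the paper), take $\bar W(x) = x^4$: its low-order Hermite coefficients are $c_0 = 3$, $c_1 = 0$, $c_2 = 12$, so the third-order remainder is $r_3(x) = x^4 - 6x^2 + 3 = H_4(x)$, and the integral formula reproduces it:

```python
import numpy as np

# 1D check of the remainder formula (3.29) with k = 3 and Wbar(x) = x^4,
# whose third-order Hermite remainder is r3(x) = x^4 - 6x^2 + 3 = H4(x).
t, tw = np.polynomial.legendre.leggauss(10)
t, tw = 0.5 * (t + 1.0), 0.5 * tw              # Gauss-Legendre on [0, 1]
h, hw = np.polynomial.hermite.hermgauss(40)
z, zw = np.sqrt(2.0) * h, hw / np.sqrt(np.pi)  # E[g(Z)] for Z ~ N(0,1)

H2 = lambda x: x**2 - 1
H3 = lambda x: x**3 - 3 * x
W3 = lambda x: 24.0 * x                        # third derivative of x^4

def r3_formula(x):
    # integral over t of (1-t)^2/2 * E[W'''((1-t)Z + tx) (H3(x) - Z H2(x))]
    vals = [np.sum(zw * W3((1 - ti) * z + ti * x) * (H3(x) - z * H2(x)))
            for ti in t]
    return float(np.sum(tw * (1 - t)**2 / 2.0 * np.array(vals)))

xs = [-1.5, 0.3, 2.0]
errs = [abs(r3_formula(x) - (x**4 - 6 * x**2 + 3)) for x in xs]
```

Both quadratures are exact here since the integrands are low-degree polynomials in $t$ and $z$, so the agreement is at machine precision.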
Proof. First, we prove point 3) from the above proof overview. Recall that $f_{1,u}(x) = u^T x$ and $f_{2,u}(x) = (u^T x)^2 - 1$. Note that we can write $\bar X = \sigma^{-1}(X - m)$, where $X \sim \rho \propto e^{-W}$. Therefore, $\mathbb{E}\,\bar X = \sigma^{-1}(m_\rho - m)$ and hence
\[
\|\sigma^{-1}(m_\rho - m)\| = \|\mathbb{E}\,\bar X\| = \sup_{\|u\|=1} \mathbb{E}[u^T \bar X] = \sup_{\|u\|=1} \mathbb{E}[f_{1,u}(\bar X)].
\]
Next, note that $\mathrm{Cov}(\bar X) = \sigma^{-1} S_\rho \sigma^{-1}$, and hence
\[
\begin{aligned}
\|\sigma^{-1} S_\rho \sigma^{-1} - I_d\| = \|\mathrm{Cov}(\bar X) - I_d\| &\le \|\mathbb{E}[\bar X \bar X^T - I_d]\| + \|\mathbb{E}\,\bar X \, \mathbb{E}\,\bar X^T\| \\
&\le \sup_{\|u\|=1} \mathbb{E}[u^T(\bar X \bar X^T - I_d)u] + \|\mathbb{E}\,\bar X\|^2 \\
&= \sup_{\|u\|=1} \mathbb{E}[(u^T \bar X)^2 - 1] + \|\mathbb{E}\,\bar X\|^2 = \sup_{\|u\|=1} \mathbb{E}[f_{2,u}(\bar X)] + \|\sigma^{-1}(m_\rho - m)\|^2. \tag{4.4}
\end{aligned}
\]
Now, recalling that $\bar W(x) = \mathrm{const.} + \|x\|^2/2 + r_3(x)$, note that
\[
\mathbb{E}[f(\bar X)] = \frac{\mathbb{E}[f(Z) e^{-r_3(Z)}]}{\mathbb{E}[e^{-r_3(Z)}]}. \tag{4.5}
\]
Write $e^{-r_3(Z)} = 1 - r_3(Z) + \frac{1}{2} r_3(Z)^2 - \frac{1}{3!} r_3(Z)^3 e^{\xi(Z)}$, where $\xi(Z)$ lies on the interval between $0$ and $-r_3(Z)$. The key insight is that for $f = f_{1,u}$ and $f = f_{2,u}$ (at most second order polynomials), $f$ is orthogonal to both $1$ and $r_3$, since $r_3$ is a series of Hermite polynomials of order greater than 2. Therefore,
\[
\mathbb{E}\left[f(Z) e^{-r_3(Z)}\right] = \mathbb{E}\left[f(Z)\left(1 - r_3(Z) + \tfrac{1}{2} r_3(Z)^2 - \tfrac{1}{3!} r_3(Z)^3 e^{\xi(Z)}\right)\right] = \mathbb{E}\left[f(Z)\left(\tfrac{1}{2} r_3(Z)^2 - \tfrac{1}{3!} r_3(Z)^3 e^{\xi(Z)}\right)\right]. \tag{4.6}
\]
Combining (4.6) with (4.5), we get
\[
\mathbb{E}[f(\bar X)] = \frac{1}{2} \frac{\mathbb{E}\left[f(Z) r_3(Z)^2\right]}{\mathbb{E}\left[e^{-r_3(Z)}\right]} - \frac{1}{3!} \frac{\mathbb{E}\left[f(Z) r_3(Z)^3 e^{\xi(Z)}\right]}{\mathbb{E}\left[e^{-r_3(Z)}\right]} =: I_1 + I_2. \tag{4.7}
\]
Using Jensen's inequality and that $\mathbb{E}[r_3(Z)] = 0$, we have $\mathbb{E}[e^{-r_3(Z)}] \ge 1$. Hence,
\[
|I_1| \lesssim \left| \mathbb{E}\left[f(Z) r_3(Z)^2\right] \right|. \tag{4.8}
\]
To bound $I_2$, note that $e^{\xi} \le 1 + e^{-r_3}$, since $\xi \le 0$ if $r_3 \ge 0$ and $\xi \le -r_3$ if $r_3 \le 0$. Hence,
\[
|I_2| \lesssim \frac{\mathbb{E}\left[|f(Z)|\,|r_3(Z)|^3\right]}{\mathbb{E}\left[e^{-r_3(Z)}\right]} + \frac{\mathbb{E}\left[|f(Z)|\,|r_3(Z)|^3 e^{-r_3(Z)}\right]}{\mathbb{E}\left[e^{-r_3(Z)}\right]} \le \mathbb{E}\left[|f(Z)|\,|r_3(Z)|^3\right] + \frac{\mathbb{E}\left[|f(Z)|\,|r_3(Z)|^3 e^{-r_3(Z)}\right]}{\mathbb{E}\left[e^{-r_3(Z)}\right]}, \tag{4.9}
\]
again using that $\mathbb{E}[e^{-r_3(Z)}] \ge 1$. Furthermore, using the conversion between $Z$ and $\bar X$ expectations (4.5), observe that
\[
\frac{\mathbb{E}\left[|f(Z)|\,|r_3(Z)|^3 e^{-r_3(Z)}\right]}{\mathbb{E}\left[e^{-r_3(Z)}\right]} = \mathbb{E}\left[|f(\bar X)|\,|r_3(\bar X)|^3\right].
\]
Incorporating this into the above bound on $|I_2|$ we get
\[
|I_2| \le \mathbb{E}\left[|f(Z)|\,|r_3(Z)|^3\right] + \mathbb{E}\left[|f(\bar X)|\,|r_3(\bar X)|^3\right]. \tag{4.10}
\]
Applying Cauchy-Schwarz to (4.10) we get
\[
|I_2| \le \sqrt{\mathbb{E}[r_3(Z)^6]} \sqrt{\mathbb{E}\, f(Z)^2} + \sqrt{\mathbb{E}\, r_3(\bar X)^6} \sqrt{\mathbb{E}\, f(\bar X)^2}.
\]
Adding this inequality to (4.8), we get
\[
\left| \mathbb{E}[f(\bar X)] \right| \lesssim \left| \mathbb{E}\left[f(Z) r_3(Z)^2\right] \right| + \sqrt{\mathbb{E}[r_3(Z)^6]} \sqrt{\mathbb{E}\, f(Z)^2} + \sqrt{\mathbb{E}\left[r_3(\bar X)^6\right]} \sqrt{\mathbb{E}\left[f(\bar X)^2\right]}. \tag{4.11}
\]
Taking $f(x) = u^T x$ and $f(x) = (u^T x)^2 - 1$ and applying Cauchy-Schwarz to the first term in (4.11) gives (4.1) and (4.2), respectively.
If $v \in C^4$ and $f(x) = u^T x$, we can refine the bound (4.11), specifically the first term $\mathbb{E}[(u^T Z)\,r_3(Z)^2]$. Write

$$r_3(x) = \frac{1}{3!}\langle c_3, H_3(x)\rangle + r_4(x).$$

Then

$$r_3(x)^2 = \frac{1}{(3!)^2}\bigl\langle c_3^{\otimes 2}, H_3(x)^{\otimes 2}\bigr\rangle + \frac{2}{3!}\,r_4(x)\,\langle c_3, H_3(x)\rangle + r_4(x)^2. \tag{4.12}$$

To get the first summand on the right we use the fact that $\langle T, S\rangle^2 = \langle T^{\otimes 2}, S^{\otimes 2}\rangle$. Substituting $x = Z$ in (4.12), multiplying by the scalar $u^T Z$, and taking the expectation of the result gives

$$\begin{aligned} \mathbb{E}\bigl[(u^T Z)\,r_3(Z)^2\bigr] &= \frac{1}{(3!)^2}\bigl\langle c_3^{\otimes 2},\ \mathbb{E}\bigl[(u^T Z)\,H_3(Z)^{\otimes 2}\bigr]\bigr\rangle + \frac{2}{3!}\,\mathbb{E}\bigl[(u^T Z)\,r_4(Z)\,\langle c_3, H_3(Z)\rangle\bigr] + \mathbb{E}\bigl[(u^T Z)\,r_4(Z)^2\bigr] \\ &= \frac{2}{3!}\,\mathbb{E}\bigl[(u^T Z)\,r_4(Z)\,\langle c_3, H_3(Z)\rangle\bigr] + \mathbb{E}\bigl[(u^T Z)\,r_4(Z)^2\bigr]. \end{aligned} \tag{4.13}$$

For the first term on the right-hand side of the first line of (4.13), note that we have chosen to move the scalar $u^T Z$ onto the second tensor $H_3^{\otimes 2}$ in the tensor dot product, and we take the $Z$ expectation only after doing so. This term drops out in the second line because each entry of $(u^T Z)H_3(Z)^{\otimes 2}$ is a polynomial containing only odd powers of $Z$; see the primer on Hermite polynomials in Section A.1. Next, let $g(x) = (u^T x)\,\langle c_3, H_3(x)\rangle$, so that $\mathbb{E}\bigl[(u^T Z)\,r_4(Z)\,\langle c_3, H_3(Z)\rangle\bigr] = \mathbb{E}[g(Z)\,r_4(Z)]$.
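The reason the $\bigl\langle c_3^{\otimes 2}, \mathbb{E}[(u^T Z)H_3(Z)^{\otimes 2}]\bigr\rangle$ term vanishes is that every entry of $(u^T Z)H_3(Z)^{\otimes 2}$ is an odd polynomial in $Z$, and odd Gaussian moments are zero. The one-dimensional analogue can be checked exactly with Gauss–Hermite quadrature, where $\mathrm{He}_3(z) = z^3 - 3z$ is the probabilists' Hermite polynomial:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Quadrature nodes/weights for the weight exp(-z^2/2); normalizing by
# sqrt(2*pi) turns the weighted sum into an expectation under Z ~ N(0, 1).
z, w = He.hermegauss(20)
w = w / np.sqrt(2 * np.pi)

H3 = He.hermeval(z, [0, 0, 0, 1])  # He_3(z) = z^3 - 3z

# z * He_3(z)^2 = z^7 - 6 z^5 + 9 z^3 contains only odd powers,
# so its Gaussian expectation is zero (up to round-off).
moment = np.sum(w * z * H3**2)
assert abs(moment) < 1e-8
```

A 20-point rule integrates polynomials up to degree 39 exactly, so the degree-7 integrand here is handled without discretization error.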
Since $\mathbb{E}[g(Z)^2] < \infty$ and $r_4$ is the tail of a convergent Hermite series, we have

$$\mathbb{E}[g(Z)\,r_4(Z)] = \mathbb{E}\Bigl[\,\sum_{k=4}^{\infty} g(Z)\,\frac{1}{k!}\langle c_k, H_k(Z)\rangle\Bigr] = \sum_{k=4}^{\infty}\frac{1}{k!}\,\mathbb{E}\bigl[g(Z)\,\langle c_k, H_k(Z)\rangle\bigr].$$

Furthermore, $g$ is a fourth order polynomial, and is therefore orthogonal to all Hermite polynomials of order greater than four. As a result, the above sum simplifies to

$$\mathbb{E}[g(Z)\,r_4(Z)] = \frac{1}{4!}\,\mathbb{E}\bigl[g(Z)\,\langle c_4, H_4(Z)\rangle\bigr] = \frac{1}{4!}\,\mathbb{E}\bigl[(u^T Z)\,\langle c_3, H_3(Z)\rangle\,\langle c_4, H_4(Z)\rangle\bigr] = \frac{1}{4!}\,\bigl\langle u \otimes c_3 \otimes c_4,\ \mathbb{E}[Z \otimes H_3(Z) \otimes H_4(Z)]\bigr\rangle. \tag{4.14}$$

Combining these calculations and applying Cauchy–Schwarz to the term $\mathbb{E}[(u^T Z)\,r_4(Z)^2]$ gives the preliminary bound (4.3).

4.1 Combining the bounds

In the following sections, we bound each of the terms appearing in (4.1), (4.2), and (4.3). For convenience, we compile these bounds below, letting $\tau = d^3/N$. Lemma 4.2 gives

$$\bigl|\langle u \otimes c_3 \otimes c_4,\ \mathbb{E}[Z \otimes H_3 \otimes H_4]\rangle\bigr| \lesssim \frac{d^{7/2}}{N^{3/2}} \le \tau^{3/2}. \tag{4.15}$$

Corollary 4.1 applied with $Y = Z$ gives

$$\sqrt{\mathbb{E}[r_3(Z)^4]} \lesssim \frac{d^3}{N} = \tau, \qquad \sqrt{\mathbb{E}[r_3(Z)^6]} \lesssim \Bigl(\frac{d^3}{N}\Bigr)^{3/2} = \tau^{3/2}, \qquad \sqrt{\mathbb{E}[r_4(Z)^4]} \lesssim \frac{d^4}{N^2} \le \tau^2. \tag{4.16}$$

Corollary 4.1 applied with $Y = \bar X$, together with Corollary 4.2, gives

$$\sqrt{\mathbb{E}[r_3(\bar X)^6]} \lesssim e^{\sqrt{d^3/N}}\Bigl(\frac{d^3}{N}\Bigr)^{3/2} = e^{\sqrt{\tau}}\,\tau^{3/2}.$$

Finally, Corollary 4.3 gives

$$\sup_{\|u\|=1}\sqrt{\mathbb{E}[(u^T\bar X)^2]} \lesssim e^{\sqrt{d^3/N}} = e^{\sqrt{\tau}}, \qquad \sup_{\|u\|=1}\sqrt{\mathbb{E}\bigl[\bigl((u^T\bar X)^2 - 1\bigr)^2\bigr]} \lesssim e^{\sqrt{d^3/N}} = e^{\sqrt{\tau}}. \tag{4.17}$$

Substituting all of these bounds into (4.1), (4.2), and (4.3) finishes the proof of Theorem 1-W.

4.2 Hermite-related Bounds

In this section we bound $\langle u \otimes c_3 \otimes c_4, \mathbb{E}[Z \otimes H_3(Z) \otimes H_4(Z)]\rangle$, as well as $\mathbb{E}[r_k(Z)^p]$ for $k = 3, 4$, $p = 4, 6$, and $\mathbb{E}[r_3(\bar X)^6]$. We take all of the assumptions to be true, either in the $W \in C^3$ case or the $W \in C^4$ case.

Lemma 4.2. If $v \in C^4$ then

$$\bigl|\langle u \otimes c_3 \otimes c_4,\ \mathbb{E}[Z \otimes H_3(Z) \otimes H_4(Z)]\rangle\bigr| \lesssim d^{7/2} N^{-3/2}. \tag{4.18}$$

Proof. We use Lemma B.3 in Appendix B.1, which shows that

$$\langle u \otimes c_3 \otimes c_4,\ \mathbb{E}[Z \otimes H_3(Z) \otimes H_4(Z)]\rangle = \langle u \otimes c_3, c_4\rangle. \tag{4.19}$$

Writing $c_3 = \sum_{i,j,k=1}^{d} c_3^{ijk}\, e_i \otimes e_j \otimes e_k$ and noting that $|c_3^{ijk}| \le \|c_3\|$, we get

$$|\langle u \otimes c_3, c_4\rangle| \le \sum_{i,j,k=1}^{d} |c_3^{ijk}|\,\bigl|\langle u \otimes e_i \otimes e_j \otimes e_k,\ c_4\rangle\bigr| \le d^3\,\|c_3\|\,\|c_4\|. \tag{4.20}$$

As explained in Section 3.4, since $v \in C^4$ we have $c_k = c_k(\bar W) = \mathbb{E}[\nabla^k \bar W(Z)]$, $k = 3, 4$. Hence

$$\|c_3\| \le \mathbb{E}\,\|\nabla^3 \bar W(Z)\| \le \|\sigma\|^3\,\mathbb{E}\,\|\nabla^3 W(m + \sigma Z)\| \lesssim N^{-1/2}. \tag{4.21}$$

To get the last inequality, we used (3.16) to bound $\|m\|$ and $\|\sigma\|$ by a constant, and we applied Lemma 3.4 with $Y = m + \sigma Z$, $p = 1$, $k = 3$. The lemma applies since $\mathbb{E}[\|m + \sigma Z\|^s] \lesssim \sqrt{d}^{\,s}$ for all $s \ge 0$; moreover, $\mathbb{E}[\|(m + \sigma Z)/\sqrt{d}\|^s] \lesssim 1$ for any $s \ge 0$, so the bound in the lemma reduces to $N^{-1/2}$.
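The expansion step (4.20) is elementary: writing $c_3$ in the standard basis produces at most $d^3$ terms, each bounded by $\|c_3\|\,\|c_4\|$. A small numpy check of the resulting inequality, using Frobenius norms in place of the paper's tensor norms (an assumption made here for convenience; it only weakens the bound, since the Frobenius norm dominates the entries and the operator-type contractions):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
u = rng.standard_normal(d)
u /= np.linalg.norm(u)                  # unit vector, as in the lemma
c3 = rng.standard_normal((d, d, d))
c4 = rng.standard_normal((d, d, d, d))

# <u (x) c3, c4>: full contraction of the rank-4 tensor u (x) c3 with c4.
lhs = abs(np.einsum('l,ijk,lijk->', u, c3, c4))

# Entrywise expansion: at most d^3 terms, each at most ||c3|| * ||c4||.
rhs = d**3 * np.linalg.norm(c3) * np.linalg.norm(c4)
assert lhs <= rhs
```

In fact Cauchy–Schwarz gives the sharper $|\langle u \otimes c_3, c_4\rangle| \le \|c_3\|_F \|c_4\|_F$ for Frobenius norms; the factor $d^3$ above simply reflects the crude term-counting used in the proof.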
Analogously,

$$\|c_4\| \lesssim N^{-1}. \tag{4.22}$$

Substituting the bounds (4.21) and (4.22) into (4.20) and using the equality (4.19) gives the bound in the statement of the lemma.

We now compute bounds on expectations of the form $\mathbb{E}[|r_k(Z)|^p]$, $k = 3, 4$, and on $\mathbb{E}[r_3(\bar X)^6]$. Using the exact formula (3.29) for $r_k$, we obtain the following bound.

Corollary 4.1 (Corollary B.1 in Appendix B.2). Let $k = 3$ if $W \in C^3$ and $k = 4$ if $W \in C^4$. Let $Y \in \mathbb{R}^d$ be a random variable such that $\mathbb{E}[\|Y\|^s] < \infty$ for all $0 \le s \le 2pk + 2pq$, where $q$ is from Assumption W2. Then

$$\mathbb{E}[|r_k(Y)|^p] \lesssim \Bigl(\frac{d^k}{N^{k-2}}\Bigr)^{p/2}\Bigl(\sqrt{\mathbb{E}\,\|Y/\sqrt{d}\|^{2kp}} + \sqrt{\mathbb{E}\,\|Y/\sqrt{d}\|^{2(k-1)p}} + 1\Bigr)\Bigl(1 + \sqrt{\mathbb{E}\bigl[\|Y/\sqrt{d}\|^{2pq}\bigr]}\Bigr). \tag{4.23}$$

Taking $Y = Z$, the expectations in (4.23) are all bounded by constants, so we immediately obtain

$$\mathbb{E}[|r_k(Z)|^p] \lesssim \Bigl(\frac{d^k}{N^{k-2}}\Bigr)^{p/2}.$$

The corollary also applies to $Y = \bar X$, $k = 3$, $p = 6$. This is because, as we show in Corollary 4.2 in the next section, $\mathbb{E}[\|X/\sqrt{d}\|^s] \lesssim \exp(2\sqrt{d^3/N}) < \infty$ for all $s \le 36 + 12q = 2pk + 2pq$. Since $\bar X = \sigma^{-1}(X - m)$, using (3.16) we conclude that also $\mathbb{E}[\|\bar X/\sqrt{d}\|^s] \lesssim \exp(2\sqrt{d^3/N}) < \infty$ for all $s \le 36 + 12q$. Hence (4.23) gives

$$\mathbb{E}[r_3(\bar X)^6] \lesssim \exp\bigl(2\sqrt{d^3/N}\bigr)\Bigl(\frac{d^3}{N}\Bigr)^{3}.$$

4.3 Bounds on X Moments

In this section we bound expectations of the form $\mathbb{E}[(a^T X)^p]$, $\|a\| \lesssim 1$, and $\mathbb{E}[\|X\|^p]$, both of which take the form

$$\mathbb{E}[f(X)] = \frac{\int_{\mathbb{R}^d} f(x)\,e^{-W(x)}\,dx}{\int_{\mathbb{R}^d} e^{-W(x)}\,dx}, \qquad 0 \le f(x) \lesssim \|x\|^p. \tag{4.24}$$

To evaluate this integral, we break up the numerator into inner, middle, and outer regions

$$I = \bigl\{\|x\| \le 2\sqrt{2}\sqrt{d}\bigr\}, \qquad M = \bigl\{2\sqrt{2}\sqrt{d} \le \|x\| \le \sqrt{N}\bigr\}, \qquad O = \bigl\{\|x\| \ge \sqrt{N}\bigr\}.$$

We then bound $\mathbb{E}[f(X)]$ as

$$\begin{aligned} \mathbb{E}[f(X)] &= \mathbb{E}\bigl[f(X)\mathbf{1}_I(X)\bigr] + \mathbb{E}\bigl[f(X)\mathbf{1}_M(X)\bigr] + \mathbb{E}\bigl[f(X)\mathbf{1}_O(X)\bigr] \\ &\lesssim \frac{1}{\int_I e^{-W(x)}dx}\Bigl(\int_I f(x)\,e^{-W(x)}dx + \int_M \|x\|^p e^{-W(x)}dx + \int_O \|x\|^p e^{-W(x)}dx\Bigr). \end{aligned} \tag{4.25}$$

The inner region $I$ is chosen so that (1) for $x \in I$, we can approximate $e^{-W(x)}$ by $e^{-\|x\|^2/2}$, and (2) the standard Gaussian density places $O(1)$ mass on $I$. This will allow us to show that

$$\frac{\int_I f(x)\,e^{-W(x)}dx}{\int_I e^{-W(x)}dx} \lesssim \mathbb{E}[f(Z)].$$

The middle region $M$ is chosen so that (1) $e^{-W(x)}$ is bounded by another, greater-variance Gaussian density, namely $e^{-\|x\|^2/4}$, and (2) this density places exponentially little mass on $M$. The bound on $\int_M \|x\|^p e^{-W(x)}dx \big/ \int_I e^{-W(x)}dx$ therefore involves a ratio of Gaussian normalization constants that grows exponentially in $d$, but this growth is neutralized by the exponentially decaying Gaussian tail probability. Finally, in $O$ we use Assumption W3 to bound the integral $\int_O \|x\|^p e^{-W(x)}dx$ by a number decaying exponentially in $N$ times the tail integral of a function $\|x\|^{-r}$. The following four short lemmas carry out this program. We let $\tau = d^3/N$ in the statements and proofs below.

Lemma 4.3. We have

$$\int_{\|x\| \le 2\sqrt{2}\sqrt{d}} e^{-W(x)}\,dx \gtrsim e^{-2\sqrt{\tau}}\,\sqrt{2\pi}^{\,d},$$

where $\tau = d^3/N$.

Proof. By Lemma 3.3 with $C = 2\sqrt{2}$, we have

$$e^{-W(x)} \ge e^{-\|x\|^2/2}\,e^{-\frac{4\sqrt{2}}{3}\, d\sqrt{d}/\sqrt{N}} \ge e^{-\|x\|^2/2}\,e^{-2\sqrt{\tau}}, \qquad \|x\| \le 2\sqrt{2}\sqrt{d}.$$

Therefore,

$$\int_{\|x\| \le 2\sqrt{2}\sqrt{d}} e^{-W(x)}dx \ge e^{-2\sqrt{\tau}}\int_{\|x\| \le 2\sqrt{2}\sqrt{d}} e^{-\frac{1}{2}\|x\|^2}dx = e^{-2\sqrt{\tau}}\,\sqrt{2\pi}^{\,d}\ \mathbb{P}\bigl(\|Z\| \le 2\sqrt{2}\sqrt{d}\bigr) \gtrsim e^{-2\sqrt{\tau}}\,\sqrt{2\pi}^{\,d}, \tag{4.26}$$

as desired.

Lemma 4.4. Let $f \ge 0$. Then $\mathbb{E}[f(X)\mathbf{1}_I(X)] \lesssim e^{2\sqrt{\tau}}\,\mathbb{E}[f(Z)]$.
In particular, $\mathbb{E}[\|X\|^p\,\mathbf{1}_I(X)] \lesssim e^{2\sqrt{\tau}}\,d^{p/2}$.

Proof. Using Lemma 4.3 and Lemma 3.3,

$$\mathbb{E}[f(X)\mathbf{1}_I(X)] \le \frac{\int_I f(x)\,e^{-W(x)}dx}{\int_I e^{-W(x)}dx} \lesssim \frac{e^{2\sqrt{\tau}}}{\sqrt{2\pi}^{\,d}}\int_I f(x)\,e^{-W(x)}dx \lesssim \frac{e^{2\sqrt{\tau}}}{\sqrt{2\pi}^{\,d}}\int_I f(x)\,e^{-\frac{1}{2}\|x\|^2}dx \le e^{2\sqrt{\tau}}\,\mathbb{E}[f(Z)], \tag{4.27}$$

as desired.

Lemma 4.5. We have $\mathbb{E}[\|X\|^p\,\mathbf{1}_M(X)] \lesssim e^{2\sqrt{\tau}}$ for all $p \ge 0$.

Proof. Using Lemma 4.3 and Lemma 3.3,

$$\mathbb{E}[\|X\|^p\mathbf{1}_M(X)] \le \frac{\int_M \|x\|^p e^{-W(x)}dx}{\int_I e^{-W(x)}dx} \lesssim \frac{e^{2\sqrt{\tau}}}{\sqrt{2\pi}^{\,d}}\int_M \|x\|^p e^{-\|x\|^2/4}dx \le \frac{e^{2\sqrt{\tau}}}{\sqrt{2\pi}^{\,d}}\int_{\|x\| \ge 2\sqrt{2}\sqrt{d}} \|x\|^p e^{-\|x\|^2/4}dx. \tag{4.28}$$

We now change variables as $x = \sqrt{2}\,y$, so that $\|x\|^p\,dx$ is bounded above by $\sqrt{2}^{\,d+p}\,\|y\|^p\,dy$. Hence

$$\begin{aligned} \mathbb{E}[\|X\|^p\,\mathbf{1}_M(X)] &\lesssim \sqrt{2}^{\,d+p}\Bigl(\frac{e^{2\sqrt{\tau}}}{\sqrt{2\pi}^{\,d}}\int_{\|y\| \ge 2\sqrt{d}} \|y\|^p\,e^{-\frac{1}{2}\|y\|^2}dy\Bigr) \\ &\lesssim e^{2\sqrt{\tau}}\,\sqrt{2}^{\,d+p}\ \mathbb{E}\bigl[\|Z\|^p\,\mathbf{1}\{\|Z\| \ge 2\sqrt{d}\}\bigr] \\ &\lesssim e^{2\sqrt{\tau}}\bigl(\sqrt{2}^{\,d+p}\,d^{p/2}\,e^{-d/4}\bigr) \lesssim e^{2\sqrt{\tau}}. \end{aligned} \tag{4.29}$$

Lemma 4.6. For all $p \le 12q + 36$ we have $\mathbb{E}[\|X\|^p\,\mathbf{1}_O(X)] \lesssim e^{2\sqrt{\tau}}$.

Proof. Using Lemma 4.3 and Assumption W3, we get

$$\begin{aligned} \mathbb{E}[\|X\|^p\,\mathbf{1}_O(X)] &\le \frac{\int_{\|x\| \ge \sqrt{N}} \|x\|^p\,e^{-W(x)}dx}{\int_I e^{-W(x)}dx} \\ &\lesssim e^{2\sqrt{\tau}}\int_{\|x\| \ge \sqrt{N}} \|x\|^{p-d-12q-36}\,dx \\ &\lesssim e^{2\sqrt{\tau}}\int_{\sqrt{N}}^{\infty} r^{p-12q-36-1}\,dr \lesssim e^{2\sqrt{\tau}}\,\sqrt{N}^{\,p-12q-36} \lesssim e^{2\sqrt{\tau}}. \end{aligned} \tag{4.30}$$

In the third line, we left out the surface area of the $(d-1)$-sphere, which is an at most $O(1)$ factor.

The above three lemmas immediately imply:

Corollary 4.2. For all $p \le 12q + 36$ we have $\mathbb{E}[\|X\|^p] \lesssim d^{p/2}\,e^{2\sqrt{\tau}}$.

We also have:

Corollary 4.3. Let $\bar X = \sigma^{-1}(X - m)$, where $\|\sigma^{-1}\|, \|m\| \lesssim 1$. If $\|u\| = 1$ then

$$\mathbb{E}[(u^T\bar X)^2] \lesssim e^{2\sqrt{\tau}}, \qquad \mathbb{E}\bigl[\bigl((u^T\bar X)^2 - 1\bigr)^2\bigr] \lesssim e^{2\sqrt{\tau}}.$$

Proof. We have $\mathbb{E}[((u^T\bar X)^2 - 1)^2] \lesssim \mathbb{E}[(u^T\bar X)^4] + 1$, so it suffices to show $\mathbb{E}[(u^T\bar X)^k] \lesssim e^{2\sqrt{\tau}}$ for $k = 2, 4$.
Since $\bar X = \sigma^{-1}(X - m)$, we have $\mathbb{E}[(u^T\bar X)^k] \lesssim \mathbb{E}[(u^T\sigma^{-1}X)^k] + \|\sigma^{-1}m\|^k$. By the assumptions on $\sigma$ and $m$, the term $\|\sigma^{-1}m\|^k$ is bounded by a constant, so it remains to show $\mathbb{E}[(a^T X)^k] \lesssim e^{2\sqrt{\tau}}$, where $a = \sigma^{-1}u$. Using Lemmas 4.4, 4.5, and 4.6 and noting that $\|a\| \lesssim 1$, we get

$$\mathbb{E}[(a^T X)^k] \lesssim \mathbb{E}\bigl[(a^T X)^k\mathbf{1}_I(X)\bigr] + \mathbb{E}\bigl[\|X\|^k\mathbf{1}_M(X)\bigr] + \mathbb{E}\bigl[\|X\|^k\mathbf{1}_O(X)\bigr] \lesssim e^{2\sqrt{\tau}}\bigl(\mathbb{E}[(a^T Z)^k] + 1\bigr) \lesssim e^{2\sqrt{\tau}}, \tag{4.31}$$

as desired.

5 Proof of Lemma 1-W

In this section, we use $m \in \mathbb{R}^d$, $\sigma \in \mathbb{R}^{d\times d}$ to denote generic arguments. Consider the equations $(E_W)$, which we rewrite in the following form:

$$\mathbb{E}[\nabla W(m + \sigma Z)] = 0, \tag{5.1}$$
$$\mathbb{E}[\nabla^2 W(m + \sigma Z)] = (\sigma\sigma^T)^{-1}. \tag{5.2}$$

Note that these equations are well-defined for all $(\sigma, m) \in \mathbb{R}^{d\times d} \times \mathbb{R}^d$, although we can only expect uniqueness of solutions in a subset of $S^d_{++} \times \mathbb{R}^d$; indeed, (5.1) and (5.2) depend on $\sigma$ only through $S = \sigma\sigma^T$, and the equation $S = \sigma\sigma^T$ has multiple solutions $\sigma$.

We now restate Lemma 1-W using the following notation:

$$B_r(0,0) = \bigl\{(\sigma, m) \in \mathbb{R}^{d\times d} \times \mathbb{R}^d : \|\sigma\|^2 + \|m\|^2 \le r^2\bigr\}, \qquad B_r = \bigl\{\sigma \in \mathbb{R}^{d\times d} : \|\sigma\| \le r\bigr\}, \qquad S_{c_1,c_2} = \bigl\{\sigma \in S^d_{+} : c_1 I_d \preceq \sigma \preceq c_2 I_d\bigr\}. \tag{5.3}$$

In particular, note that $S_{0,r} \subset B_r$.

Lemma 5.1.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Let W satisfy Assumptions W0, W1, W2, and W3, and assume √ N/d ≥ 40 √ 2( √ 3+ � (2q)!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Let r = 2 √ 2, c1 = � 2/3, and c2 = √ 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' There exists a unique pair (σ, m) ∈ Br(0, 0) ∩ S0,r/2 × Rd satisfying (5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='1) and (5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='2), and this pair is such that σ ∈ Sc1,c2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Let us sketch the proof of the lemma.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Let f : Rd×d × Rd → Rd be given by f(σ, m) = E [∇W(m + σZ)].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Note that f(0, 0) = 0, so by the Implicit Function Theorem, there exists a map m(σ) defined in a neighborhood of σ = 0 such that f(σ, m(σ)) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' In Lemma 5.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='3, we make this statement quantitative, showing that for r = 2 √ 2 we have the following result: for every σ ∈ Br/2 there is a unique m = m(σ) such that (σ, m) ∈ Br(0, 0) and f(σ, m) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Since S0,r/2 ⊂ Br/2, we have in particular that any solution (σ, m) to (5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='1) in the region Br(0, 0) ∩ S0,r/2 × Rd is of the form (σ, m(σ)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Thus it remains to prove there exists a unique solution σ ∈ S0,r/2 to the equation E [∇2W(m(σ) + σZ)] = (σσT )−1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' To do so, we rewrite this equation as F(σ) = σ, where F(σ) = E [∇2W(m(σ) + σZ)]−1/2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' We show in Lemma 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='4 that F is well-defined on S0,r/2, a contraction, and satisfies F(S0,r/2) ⊂ Sc1,c2 ⊂ S0,r/2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Thus by the Contraction Mapping Theorem, there is a unique σ ∈ S0,r/2 satisfying F(σ) = σ.' 
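In one dimension this fixed-point scheme is easy to run numerically. The sketch below (Python with NumPy assumed; the quartic potential is a made-up example, not the $W$ of the paper) alternates a Newton step for $m$ solving (5.1) with the update $\sigma \leftarrow F(\sigma)$ from (5.2), with Gaussian expectations computed by Gauss–Hermite quadrature.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Gauss-Hermite rule for the probabilist's weight e^{-x^2/2}; after
# normalization, E[g(Z)] ~ sum_i w_i g(x_i) for Z ~ N(0, 1).
nodes, weights = hermegauss(60)
weights = weights / np.sqrt(2 * np.pi)

def E(g):
    return float(np.sum(weights * g(nodes)))

# Toy potential (a made-up example): W(x) = x^2/2 + 0.05 x^4
dW  = lambda x: x + 0.2 * x**3        # W'
d2W = lambda x: 1.0 + 0.6 * x**2      # W''

m, sigma = 0.3, 1.0                   # arbitrary starting point
for _ in range(50):
    # inner step: Newton update for m in E[W'(m + sigma*Z)] = 0, cf. (5.1)
    m -= E(lambda z: dW(m + sigma * z)) / E(lambda z: d2W(m + sigma * z))
    # outer step: sigma <- F(sigma) = E[W''(m + sigma*Z)]^(-1/2), cf. (5.2)
    sigma = E(lambda z: d2W(m + sigma * z)) ** (-0.5)

print(m, sigma)   # m -> 0 by symmetry; sigma solves sigma^2 E[W''(sigma Z)] = 1
```

For this convex toy potential the map $\sigma \mapsto F(\sigma)$ is a contraction and the iteration converges rapidly; the limit satisfies $\sigma^2\,\mathbb{E}[W''(m + \sigma Z)] = 1$, the one-dimensional form of (5.2).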
But since $F$ maps $S_{0,r/2}$ to $S_{c_1,c_2}$, the fixed point $\sigma$ necessarily lies in $S_{c_1,c_2}$. This finishes the proof.

Using a quantitative statement of the Inverse Function Theorem given in [Lan93], the following lemma determines the size of the neighborhood in which the map $m(\sigma)$ is defined.

Lemma 5.2. Let $f = (f_1, \dots, f_d) : \mathbb{R}^{d\times d} \times \mathbb{R}^d \to \mathbb{R}^d$ be $C^3$, where $\mathbb{R}^{d\times d}$ is the set of $d \times d$ matrices, endowed with the standard matrix operator norm. Suppose $f(0,0) = 0$, $\nabla_\sigma f(0,0) = 0$, $\nabla_m f(\sigma, m)$ is symmetric for all $m, \sigma$, and $\nabla_m f(0,0) = I_d$. Let $r > 0$ be such that
\[
\sup_{(\sigma,m)\in B_r(0,0)} \|\nabla f(\sigma, m) - \nabla f(0, m_*)\|_{op} \le \frac14. \tag{5.4}
\]
Then for each $\sigma \in \mathbb{R}^{d\times d}$ such that $\|\sigma\| \le r/2$ there exists a unique $m = m(\sigma) \in \mathbb{R}^d$ such that $f(\sigma, m(\sigma)) = 0$ and $(\sigma, m(\sigma)) \in B_r(0,0)$. Furthermore, the map $\sigma \mapsto m(\sigma)$ is $C^2$, with
\[
\frac12 I_d \preceq \nabla_m f(\sigma, m)\Big|_{m=m(\sigma)} \preceq \frac32 I_d, \qquad \|\nabla_\sigma m(\sigma)\|_{op} \le 1. \tag{5.5}
\]
See Appendix E for careful definitions of the norms appearing above, as well as the proof of the lemma.

Lemma 5.3. Let $f : \mathbb{R}^{d\times d} \times \mathbb{R}^d \to \mathbb{R}^d$ be given by $f(\sigma, m) = \mathbb{E}[\nabla W(\sigma Z + m)]$. Then all the conditions of Lemma 5.2 are satisfied; in particular, (5.4) is satisfied with $r = 2\sqrt{2}$. Thus the conclusions of Lemma 5.2 hold with this choice of $r$.

Lemma 5.4. Let $r = 2\sqrt{2}$ and let $\sigma \in S_{0,r/2} \mapsto m(\sigma) \in \mathbb{R}^d$ be the restriction of the map furnished by Lemmas 5.2 and 5.3 to symmetric nonnegative matrices. Then the function $F$ given by $F(\sigma) = \mathbb{E}[\nabla^2 W(m(\sigma) + \sigma Z)]^{-1/2}$ is well-defined and a strict contraction on $S_{0,r/2}$. Moreover, $F(S_{0,r/2}) \subseteq S_{c_1,c_2} \subseteq S_{0,r/2}$, where $c_1 = \sqrt{2/3}$ and $c_2 = \sqrt{2} = r/2$.

This lemma concludes the proof of Lemma 5.1: by the Contraction Mapping Theorem there is a unique fixed point $\sigma \in S_{0,r/2}$ of $F$, and $F(\sigma) = \sigma$ is simply a reformulation of the second optimality equation (5.2). We know $\sigma$ must lie in $S_{c_1,c_2}$ since $F$ maps $S_{0,r/2}$ to this set. See Appendix E for the proofs of the above lemmas.

Acknowledgments

A. Katsevich is supported by NSF grant DMS-2202963. P. Rigollet is supported by NSF grants IIS-1838071, DMS-2022448, and CCF-2106377.
A Hermite Series Remainder

A.1 Brief Primer

Here is a brief primer on Hermite polynomials and their series expansions. We let $H_k : \mathbb{R} \to \mathbb{R}$, $k = 0, 1, 2, \dots$ be the $k$th order probabilist's Hermite polynomial. We have $H_0(x) = 1$, $H_1(x) = x$, $H_2(x) = x^2 - 1$, $H_3(x) = x^3 - 3x$. For all $k \ge 1$, we can generate $H_{k+1}$ from the recurrence relation
\[
H_{k+1}(x) = xH_k(x) - kH_{k-1}(x), \qquad k \ge 1. \tag{A.1}
\]
In particular, $H_k(x)$ is an order $k$ polynomial given by a sum of monomials of the same parity as $k$.
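The recurrence (A.1) is straightforward to implement. The sketch below (Python with NumPy assumed) builds the coefficient arrays of the probabilist's Hermite polynomials and recovers $H_2$, $H_3$, and $H_4$:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Probabilist's Hermite polynomials H_k as coefficient arrays (lowest
# degree first), built from the recurrence H_{k+1} = x H_k - k H_{k-1}.
def hermite_coeffs(n):
    H = [np.array([1.0]), np.array([0.0, 1.0])]   # H_0 = 1, H_1 = x
    for k in range(1, n):
        H.append(P.polysub(P.polymulx(H[k]), k * H[k - 1]))
    return H

H = hermite_coeffs(4)
print(H[2])   # H_2 = x^2 - 1
print(H[3])   # H_3 = x^3 - 3x
print(H[4])   # H_4 = x^4 - 6x^2 + 3
```

Note that each step multiplies by $x$ and subtracts $k$ times the previous polynomial, so only monomials of the same parity as $k$ ever appear, matching the parity observation above.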
The $H_k$ are orthogonal with respect to the Gaussian measure; namely, we have $\mathbb{E}[H_k(Z)H_j(Z)] = k!\,\delta_{jk}$. We also note for future reference that
\[
\mathbb{E}[ZH_k(Z)H_{k+1}(Z)] = \mathbb{E}[(H_{k+1}(Z) + kH_{k-1}(Z))H_{k+1}(Z)] = (k+1)!, \tag{A.2}
\]
using the recurrence relation (A.1).

The multivariate Hermite polynomials are given by products of univariate Hermite polynomials, and are indexed by $\gamma \in \{0, 1, 2, \dots\}^d$. Let $\gamma = (\gamma_1, \dots, \gamma_d)$, with $\gamma_j \in \{0, 1, 2, \dots\}$. Then
\[
H_\gamma(x_1, \dots, x_d) = \prod_{j=1}^d H_{\gamma_j}(x_j),
\]
which has order $|\gamma| := \sum_{j=1}^d \gamma_j$. Note that if $|\gamma| = k$ then $H_\gamma(x)$ is given by a sum of monomials of the same parity as $k$. Indeed, each $H_{\gamma_j}(x_j)$ is a linear combination of $x_j^{\gamma_j - 2p}$, $p \le \lfloor \gamma_j/2 \rfloor$. Thus $H_\gamma(x)$ is a linear combination of monomials of the form $\prod_{j=1}^d x_j^{\gamma_j - 2p_j}$, which has total order $k - 2\sum_j p_j$. Using the independence of the entries of $Z = (Z_1, \dots, Z_d)$, we have
\[
\mathbb{E}[H_\gamma(Z)H_{\gamma'}(Z)] = \gamma! \prod_{j=1}^d \delta_{\gamma_j, \gamma'_j},
\]
where $\gamma! := \prod_{j=1}^d \gamma_j!$. The $H_\gamma$ can also be defined explicitly as follows:
\[
e^{-\|x\|^2/2} H_\gamma(x) = (-1)^{|\gamma|}\, \partial^\gamma\big(e^{-\|x\|^2/2}\big), \tag{A.3}
\]
where $\partial^\gamma f(x) = \partial_{x_1}^{\gamma_1} \cdots \partial_{x_d}^{\gamma_d} f(x)$.
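The orthogonality relation $\mathbb{E}[H_\gamma(Z)H_{\gamma'}(Z)] = \gamma!\prod_j \delta_{\gamma_j,\gamma'_j}$ can be checked numerically. The sketch below (Python; NumPy assumed) tensorizes a 1-d Gauss–Hermite rule to evaluate these expectations for $d = 2$:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval, hermegauss

# 1-d Gauss-Hermite rule (probabilist's weight e^{-x^2/2}), normalized
# so that E[g(Z)] ~ sum_i w_i g(x_i) for Z ~ N(0, 1).
x, w = hermegauss(30)
w = w / np.sqrt(2 * np.pi)

def He(k, t):
    # k-th probabilist's Hermite polynomial via numpy's HermiteE basis
    return hermeval(t, [0.0] * k + [1.0])

def H(gamma, t1, t2):
    # multivariate Hermite polynomial: H_gamma(t) = H_{g1}(t1) * H_{g2}(t2)
    return He(gamma[0], t1) * He(gamma[1], t2)

def E2(g):
    # E[g(Z1, Z2)] for iid standard Gaussians, by a tensorized rule
    X1, X2 = np.meshgrid(x, x)
    return float(np.sum(np.outer(w, w) * g(X1, X2)))

same  = E2(lambda a, b: H((2, 1), a, b) ** 2)               # gamma! = 2! * 1! = 2
cross = E2(lambda a, b: H((2, 1), a, b) * H((1, 2), a, b))  # 0 by orthogonality
print(same, cross)
```

Since the integrands are polynomials of low degree, a 30-point rule per coordinate evaluates these expectations exactly up to floating-point error.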
This leads to the useful Gaussian integration by parts identity,
\[
\mathbb{E}[f(Z)H_\gamma(Z)] = \mathbb{E}[\partial^\gamma f(Z)], \qquad \text{if } f \in C^{|\gamma|}(\mathbb{R}^d).
\]
The Hermite polynomials $H_\gamma$, $\gamma \in \{0, 1, \dots\}^d$, form a complete orthogonal basis of the Hilbert space of functions $f : \mathbb{R}^d \to \mathbb{R}$ with inner product $\langle f, g \rangle = \mathbb{E}[f(Z)g(Z)]$. In particular, if $f : \mathbb{R}^d \to \mathbb{R}$ satisfies $\mathbb{E}[f(Z)^2] < \infty$, then $f$ has the following Hermite expansion:
\[
f(x) = \sum_{\gamma \in \{0,1,\dots\}^d} \frac{1}{\gamma!}\, c_\gamma(f) H_\gamma(x), \qquad c_\gamma(f) := \mathbb{E}[f(Z)H_\gamma(Z)]. \tag{A.4}
\]
Let
\[
r_k(x) = f(x) - \sum_{|\gamma| \le k-1} \frac{1}{\gamma!}\, c_\gamma(f) H_\gamma(x) \tag{A.5}
\]
be the remainder of the Hermite series expansion of $f$ after taking out the polynomials of order at most $k - 1$. We can write $r_k$ as an integral of $f$ against a kernel. Namely, define
\[
K(x, y) = \sum_{|\gamma| \le k-1} \frac{1}{\gamma!}\, H_\gamma(x) H_\gamma(y). \tag{A.6}
\]
Note that $\mathbb{E}[f(Z)K(x, Z)] = \sum_{|\gamma| \le k-1} \frac{1}{\gamma!} c_\gamma(f) H_\gamma(x)$ is the truncated Hermite series expansion of $f$. Therefore, the remainder $r_k$ can be written as
\[
r_k(x) = f(x) - \mathbb{E}[f(Z)K(x, Z)] = \mathbb{E}[(f(x) - f(Z))K(x, Z)],
\]
using that $\mathbb{E}[K(x, Z)] = 1$.

B Exact Expression for the Remainder

Lemma B.1.
Let $k \ge 1$ and let $r_k$, $K$ be as in (A.5), (A.6), respectively. Assume that $f \in C^1$ and that $\|\nabla f(x)\| \lesssim e^{c\|x\|^2}$ for some $0 \le c < 1/2$. Then
\[
r_k(x) = \mathbb{E}[(f(x) - f(Z))K(x, Z)] = \int_0^1 \sum_{i=1}^d \sum_{|\gamma| = k-1} \frac{1}{\gamma!}\, \mathbb{E}\big[\partial_i f((1-t)Z + tx)\big(H_{\gamma+e_i}(x)H_\gamma(Z) - H_{\gamma+e_i}(Z)H_\gamma(x)\big)\big]\,dt. \tag{B.1}
\]

The proof relies on the following identity:

Lemma B.2. For each $i = 1, \dots, d$, it holds that
\[
K(x, y) = \frac{1}{x_i - y_i} \sum_{|\gamma| = k-1} \frac{1}{\gamma!}\, \big(H_{\gamma+e_i}(x)H_\gamma(y) - H_{\gamma+e_i}(y)H_\gamma(x)\big). \tag{B.2}
\]
The proof of this identity is given at the end of the section.

Proof of Lemma B.1. Write
\[
f(x) - f(Z) = \int_0^1 (x - Z)^T \nabla f((1-t)Z + tx)\,dt = \sum_{i=1}^d \int_0^1 (x_i - Z_i)\,\partial_i f((1-t)Z + tx)\,dt, \tag{B.3}
\]
so that, using (B.2), we have
\[
\mathbb{E}[(f(x) - f(Z))K(x, Z)] = \sum_{i=1}^d \sum_{|\gamma| = k-1} \frac{1}{\gamma!}\, \mathbb{E}\Big[\int_0^1 \partial_i f((1-t)Z + tx)\big(H_{\gamma+e_i}(x)H_\gamma(Z) - H_{\gamma+e_i}(Z)H_\gamma(x)\big)\,dt\Big]. \tag{B.4}
\]
By assumption,
\[
\sup_{t\in[0,1]} |\partial_i f((1-t)Z + tx)|\,\big(|H_\gamma(Z)| + |H_{\gamma+e_i}(Z)|\big) \lesssim \exp\big(c\|Z\|^2 + 2c\|Z\|\|x\|\big)\big(|H_\gamma(Z)| + |H_{\gamma+e_i}(Z)|\big) \tag{B.5}
\]
for some $0 \le c < 1/2$. The right-hand side is integrable with respect to the Gaussian measure, and therefore we can interchange the expectation and the integral in (B.4). Therefore,
\[
\mathbb{E}[(f(x) - f(Z))K(x, Z)] = \int_0^1 \sum_{i=1}^d \sum_{|\gamma| = k-1} \frac{1}{\gamma!}\, \mathbb{E}\big[\partial_i f((1-t)Z + tx)\big(H_{\gamma+e_i}(x)H_\gamma(Z) - H_{\gamma+e_i}(Z)H_\gamma(x)\big)\big]\,dt. \tag{B.6}
\]

Proof of Lemma B.2. Without loss of generality, assume $i = 1$.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' To simplify the proof, we will also assume d = 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' The reader can check that the proof goes through in the same way for general d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' By the recursion relation (A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='1) for 1d Hermite polynomials, we get that Hγ1+1,γ2(x) = x1Hγ1,γ2(x) − γ1Hγ1−1,γ2(x), where x = (x1, x2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Multiply this equation by Hγ(y) (where y = (y1, y2)) and swap x and y, to get the two equations Hγ1+1,γ2(x)Hγ1,γ2(y) = x1Hγ1,γ2(x)Hγ1,γ2(y) − γ1Hγ1−1,γ2(x)Hγ1,γ2(y), Hγ1+1,γ2(y)Hγ1,γ2(x) = y1Hγ1,γ2(x)Hγ1,γ2(y) − γ1Hγ1−1,γ2(y)Hγ1,γ2(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' (B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='7) Let Sγ1,γ2 = Hγ1+1,γ2(x)Hγ1,γ2(y) − Hγ1+1,γ2(y)Hγ1,γ2(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' Subtracting the second equation of (B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='7) from the first one, and using the Sγ1,γ2 notation, gives Sγ1,γ2 = (x1 − y1)Hγ1,γ2(x)Hγ1,γ2(y) + γ1Sγ1−1,γ2 (B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='8) 23 and hence Sγ1,γ2 γ1!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='γ2!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' = (x1 − y1)Hγ1,γ2(x)Hγ1,γ2(y) γ1!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='γ2 + Sγ1−1,γ2 (γ1 − 1)!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='γ2!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='. Iterating this recursive relationship γ1 − 1 times, we get Sγ1,γ2 γ1!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='γ2!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' = (x1 − y1) γ1−1 � j=0 Hγ1−j,γ2(x)Hγ1−j,γ2(y) (γ1 − j)!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='γ2!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' + S0,γ2 0!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content='γ2!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/9dA0T4oBgHgl3EQfO_9E/content/2301.02168v1.pdf'} +page_content=' (B.' 
Now, we have

  $S_{0,\gamma_2} = H_{1,\gamma_2}(x) H_{0,\gamma_2}(y) - H_{1,\gamma_2}(y) H_{0,\gamma_2}(x)$
  $\phantom{S_{0,\gamma_2}} = H_1(x_1) H_{\gamma_2}(x_2) H_{\gamma_2}(y_2) - H_1(y_1) H_{\gamma_2}(y_2) H_{\gamma_2}(x_2)$
  $\phantom{S_{0,\gamma_2}} = (x_1 - y_1) H_{\gamma_2}(x_2) H_{\gamma_2}(y_2) = (x_1 - y_1) H_{0,\gamma_2}(x) H_{0,\gamma_2}(y)$.   (B.10)

Therefore,

  $\frac{S_{0,\gamma_2}}{0! \gamma_2!} = (x_1 - y_1) \frac{H_{\gamma_1-j,\gamma_2}(x) H_{\gamma_1-j,\gamma_2}(y)}{(\gamma_1-j)! \gamma_2!}$,  with $j = \gamma_1$,

so (B.9) can be written as

  $\frac{S_{\gamma_1,\gamma_2}}{\gamma_1! \gamma_2!} = (x_1 - y_1) \sum_{j=0}^{\gamma_1} \frac{H_{\gamma_1-j,\gamma_2}(x) H_{\gamma_1-j,\gamma_2}(y)}{(\gamma_1-j)! \gamma_2!}$

and hence

  $\frac{1}{x_1 - y_1} \sum_{\gamma_1+\gamma_2=k-1} \frac{S_{\gamma_1,\gamma_2}}{\gamma_1! \gamma_2!} = \sum_{\gamma_1+\gamma_2=k-1} \sum_{j=0}^{\gamma_1} \frac{H_{\gamma_1-j,\gamma_2}(x) H_{\gamma_1-j,\gamma_2}(y)}{(\gamma_1-j)! \gamma_2!} = \sum_{\gamma_1+\gamma_2\le k-1} \frac{H_{\gamma_1,\gamma_2}(x) H_{\gamma_1,\gamma_2}(y)}{\gamma_1! \gamma_2!} = K(x, y)$,   (B.11)

using the observation that

  $\{(\gamma_1 - j, \gamma_2) : \gamma_1 + \gamma_2 = k-1,\ 0 \le j \le \gamma_1\} = \{(\tilde\gamma_1, \gamma_2) : \tilde\gamma_1 + \gamma_2 \le k-1\}$.   (B.12)

Substituting back in the definition of $S_{\gamma_1,\gamma_2}$ gives the desired result.
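The two-dimensional identity just proved, $(x_1 - y_1) K(x, y) = \sum_{\gamma_1+\gamma_2=k-1} S_{\gamma_1,\gamma_2}/(\gamma_1! \gamma_2!)$, can be checked numerically. The sketch below is illustrative (helper names are ours), assuming the probabilists' normalization used throughout:

```python
import math

def He(k, x):
    """Probabilists' Hermite polynomial He_k(x) via the three-term recurrence."""
    if k == 0:
        return 1.0
    h_prev, h = 1.0, x
    for n in range(1, k):
        h_prev, h = h, x * h - n * h_prev
    return h

def H2d(g1, g2, x):
    # H_{γ1,γ2}(x) = He_{γ1}(x_1) He_{γ2}(x_2)
    return He(g1, x[0]) * He(g2, x[1])

def kernel(k, x, y):
    # K(x, y) = Σ_{γ1+γ2 ≤ k−1} H_γ(x) H_γ(y) / γ!
    return sum(H2d(g1, g2, x) * H2d(g1, g2, y)
               / (math.factorial(g1) * math.factorial(g2))
               for g1 in range(k) for g2 in range(k - g1))

def cd_sum(k, x, y):
    # Σ_{γ1+γ2 = k−1} S_{γ1,γ2} / (γ1! γ2!) with S as defined above
    total = 0.0
    for g1 in range(k):
        g2 = k - 1 - g1
        S = (H2d(g1 + 1, g2, x) * H2d(g1, g2, y)
             - H2d(g1 + 1, g2, y) * H2d(g1, g2, x))
        total += S / (math.factorial(g1) * math.factorial(g2))
    return total

x, y = (0.4, -1.1), (1.7, 0.3)
for k in range(1, 6):
    assert abs((x[0] - y[0]) * kernel(k, x, y) - cd_sum(k, x, y)) < 1e-8
```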
B.1 Hermite Series Remainder in Tensor Form

Using (B.1), it is difficult to obtain an upper bound on $|r_k(x)|$, since we need to sum over all $\gamma$ of order $k-1$. In this section, we obtain a more compact representation of $r_k$ in terms of a scalar product of $k$-tensors. We then take advantage of a very useful representation of the tensor of order-$k$ Hermite polynomials as an expectation of a vector outer product. This allows us to bound the scalar product in the $r_k$ formula in terms of an operator norm rather than a Frobenius norm (the latter would incur a larger $d$ dependence).

First, let us put all the unique $k$th-order Hermite polynomials into a tensor of $d^k$ entries, some of which repeat, enumerated by multi-indices $\alpha = (\alpha_1, \dots, \alpha_k) \in [d]^k$. Here, $[d] = \{1, \dots, d\}$. We do so as follows: given $\alpha \in [d]^k$, define $\gamma(\alpha) = (\gamma_1(\alpha), \dots, \gamma_d(\alpha))$ by

  $\gamma_j(\alpha) = \sum_{\ell=1}^{k} 1\{\alpha_\ell = j\}$,

i.e. $\gamma_j(\alpha)$ counts how many times index $j$ appears in $\alpha$. For this reason, we use the term counting index to denote indices of the form $\gamma = (\gamma_1, \dots, \gamma_d) \in \{0, 1, 2, \dots\}^d$, whereas we use the standard term “multi-index” to refer to the $\alpha$'s. Note that we automatically have $|\gamma(\alpha)| = k$ if $\alpha \in [d]^k$. Now, for $x \in \mathbb{R}^d$, define $H_0(x) = 1$ and $H_k(x)$, $k \ge 1$, as the tensor

  $H_k(x) = \{H_{\gamma(\alpha)}(x)\}_{\alpha \in [d]^k}$,  $x \in \mathbb{R}^d$.

When enumerating the entries of $H_k$, we write $H_k^{(\alpha)}$ to denote $H_{\gamma(\alpha)}$. Note that for each $\gamma$ with $|\gamma| = k$, there are $\binom{k}{\gamma}$ $\alpha$'s such that $\gamma(\alpha) = \gamma$.

Example B.1. Consider the $\alpha = (i, j, j, k, k, k)$ entry of the tensor $H_6(x)$, where $i, j, k \in [d]$ are all distinct. We count that $i$ occurs once, $j$ occurs twice, and $k$ occurs thrice. Thus

  $H_6^{(i,j,j,k,k,k)}(x) = H_1(x_i) H_2(x_j) H_3(x_k) = x_i (x_j^2 - 1)(x_k^3 - 3x_k)$.

The first two tensors $H_1, H_2$ can be written down explicitly. For the entries of $H_1$, we simply have $H_1^{(i)}(x) = H_1(x_i) = x_i$, i.e. $H_1(x) = x$. For the entries of $H_2$, we have $H_2^{(i,i)}(x) = H_2(x_i) = x_i^2 - 1$ and $H_2^{(i,j)}(x) = H_1(x_i) H_1(x_j) = x_i x_j$ for $i \ne j$. Thus $H_2(x) = x x^T - I_d$. We now group the terms in the Hermite series expansion (A.4) based on the order $|\gamma|$.
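The passage from a multi-index $\alpha$ to its counting index $\gamma(\alpha)$, and the resulting tensor entry $H_k^{(\alpha)}(x)$, can be sketched as follows. The helper names are illustrative, coordinates are 0-based here, and Example B.1 is checked with $(i, j, k) = (0, 1, 2)$:

```python
from collections import Counter

def He(k, x):
    """Probabilists' Hermite polynomial He_k(x) via the three-term recurrence."""
    if k == 0:
        return 1.0
    h_prev, h = 1.0, x
    for n in range(1, k):
        h_prev, h = h, x * h - n * h_prev
    return h

def counting_index(alpha, d):
    """γ(α): how many times each coordinate j ∈ {0, ..., d-1} appears in α."""
    c = Counter(alpha)
    return tuple(c.get(j, 0) for j in range(d))

def H_entry(alpha, x):
    """Tensor entry H_k^{(α)}(x) = Π_j He_{γ_j(α)}(x_j)."""
    out = 1.0
    for j, g in enumerate(counting_index(alpha, len(x))):
        out *= He(g, x[j])
    return out

# Example B.1 with (i, j, k) = (0, 1, 2): α = (0,1,1,2,2,2) gives
# H_6^{(α)}(x) = x_0 (x_1^2 - 1)(x_2^3 - 3 x_2).
x = (0.9, -0.4, 1.6)
alpha = (0, 1, 1, 2, 2, 2)
expected = x[0] * (x[1]**2 - 1) * (x[2]**3 - 3*x[2])
assert abs(H_entry(alpha, x) - expected) < 1e-12
```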
Consider all $\gamma$ in the sum such that $|\gamma| = k$. We claim that

  $\sum_{|\gamma|=k} \frac{1}{\gamma!} c_\gamma(f) H_\gamma(x) = \frac{1}{k!} \sum_{\alpha \in [d]^k} c_{\gamma(\alpha)}(f) H_k^{(\alpha)}(x)$.   (B.13)

Indeed, for a fixed $\gamma$ such that $|\gamma| = k$, there are $\binom{k}{\gamma}$ $\alpha$'s in $[d]^k$ for which $\gamma(\alpha) = \gamma$, and the summands in the right-hand sum corresponding to these $\alpha$'s are all identical, equalling $c_\gamma(f) H_\gamma(x)$. Thus we obtain $\binom{k}{\gamma}$ copies of $c_\gamma(f) H_\gamma(x)$, and it remains to note that $\binom{k}{\gamma}/k! = 1/\gamma!$.

Analogously to $H_k(x)$, define the tensor $c_k \in (\mathbb{R}^d)^{\otimes k}$, whose $\alpha$'th entry is

  $c_k^{(\alpha)} = c_{\gamma(\alpha)} = \mathbb{E}[f(Z) H_{\gamma(\alpha)}(Z)] = \mathbb{E}[f(Z) H_k^{(\alpha)}(Z)]$.

We then see that the sum (B.13) can be written as $\frac{1}{k!}\langle c_k, H_k(x)\rangle$, and hence the series expansion of $f$ can be written as

  $f(x) = \sum_{k=0}^{\infty} \frac{1}{k!} \langle c_k(f), H_k(x)\rangle$,  $c_k(f) := \mathbb{E}[f(Z) H_k(Z)]$.   (B.14)

The main result of this section is Lemma B.4 below, in which we express $r_k$ in terms of a tensor scalar product. However, let us first prove the following lemma, which is needed to bound the term $\langle u \otimes c_3 \otimes c_4, \mathbb{E}[Z \otimes H_3 \otimes H_4]\rangle$ in the preliminary bound (4.3).
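The combinatorial fact behind (B.13), namely that exactly $\binom{k}{\gamma} = k!/\gamma!$ multi-indices $\alpha \in [d]^k$ share a given counting index $\gamma$, can be verified by brute force for small $d$ and $k$ (an illustrative sketch, not code from the paper):

```python
import math
from collections import Counter
from itertools import product

def counting_index(alpha, d):
    """γ(α): how many times each coordinate j ∈ {0, ..., d-1} appears in α."""
    c = Counter(alpha)
    return tuple(c.get(j, 0) for j in range(d))

d, k = 3, 4
# Count, for every γ with |γ| = k, how many α ∈ [d]^k map to it.
counts = Counter(counting_index(alpha, d) for alpha in product(range(d), repeat=k))
for gamma, n in counts.items():
    multinomial = math.factorial(k)
    for g in gamma:
        multinomial //= math.factorial(g)
    assert n == multinomial  # n = k!/γ! = (k choose γ)
```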
Lemma B.3. Let $c_p, c_{p+1}$ be symmetric tensors in $(\mathbb{R}^d)^{\otimes p}$ and $(\mathbb{R}^d)^{\otimes(p+1)}$, respectively. Then

  $\langle u \otimes c_p \otimes c_{p+1}, \mathbb{E}[Z \otimes H_p(Z) \otimes H_{p+1}(Z)]\rangle = (p+1)!\, \langle u \otimes c_p, c_{p+1}\rangle$.   (B.15)

Proof. Let $T = \mathbb{E}[Z \otimes H_p \otimes H_{p+1}]$. First, we characterize the nonzero entries of $T$ using the counting index notation. In counting index notation, a typical entry of $T$ takes the form $\mathbb{E}[Z_i H_\gamma(Z) H_{\gamma'}(Z)]$, where $i \in [d]$, $|\gamma| = p$, and $|\gamma'| = p+1$. Now,

  $\mathbb{E}[Z_i H_\gamma(Z) H_{\gamma'}(Z)] = \mathbb{E}[Z_i H_{\gamma_i}(Z_i) H_{\gamma'_i}(Z_i)] \prod_{j \ne i} \mathbb{E}[H_{\gamma_j}(Z_j) H_{\gamma'_j}(Z_j)] = \mathbb{E}[Z_i H_{\gamma_i}(Z_i) H_{\gamma'_i}(Z_i)] \prod_{j \ne i} \delta_{\gamma_j, \gamma'_j}\, \gamma_j!$.   (B.16)

For this to be nonzero, we must have $\gamma_j = \gamma'_j$ for all $j \ne i$. But since $|\gamma| = p$ and $|\gamma'| = p+1$, it follows that we must have $\gamma'_i = \gamma_i + 1$. Hence $\gamma' = \gamma + e_i$, where $e_i$ is the $i$th unit vector. To summarize, $T_{i,\gamma,\gamma'}$ is nonzero only if $\gamma' = \gamma + e_i$. In this case, we have

  $\mathbb{E}[Z_i H_\gamma(Z) H_{\gamma+e_i}(Z)] = \mathbb{E}[Z_i H_{\gamma_i}(Z_i) H_{\gamma_i+1}(Z_i)] \prod_{j \ne i} \gamma_j! = (\gamma_i + 1)! \prod_{j \ne i} \gamma_j! = (\gamma + e_i)!$.   (B.17)

To get the second equality we used the following recurrence relation for 1d Hermite polynomials: $x H_k(x) = H_{k+1}(x) + k H_{k-1}(x)$ for all $k \ge 1$. Now, we take the inner product (B.15) using counting index notation, recalling that each $\gamma$ such that $|\gamma| = p$ shows up in the tensor $H_p$ exactly $p!/\gamma!$ times:

  $\langle u \otimes c_p \otimes c_{p+1}, \mathbb{E}[Z \otimes H_p(Z) \otimes H_{p+1}(Z)]\rangle$
  $= \sum_{i=1}^{d} \sum_{|\gamma|=p} \sum_{|\gamma'|=p+1} \frac{p!}{\gamma!} \frac{(p+1)!}{\gamma'!} u_i c_\gamma c_{\gamma'} \mathbb{E}[Z_i H_\gamma(Z) H_{\gamma'}(Z)]$
  $= \sum_{i=1}^{d} \sum_{|\gamma|=p} \frac{p!}{\gamma!} \frac{(p+1)!}{(\gamma+e_i)!} u_i c_\gamma c_{\gamma+e_i} (\gamma+e_i)!$
  $= (p+1)! \sum_{i=1}^{d} \sum_{|\gamma|=p} \frac{p!}{\gamma!} u_i c_\gamma c_{\gamma+e_i}$
  $= (p+1)! \sum_{i=1}^{d} \sum_{j_1,\dots,j_p=1}^{d} u_i c_p^{(j_1,\dots,j_p)} c_{p+1}^{(i,j_1,\dots,j_p)}$
  $= (p+1)!\, \langle u \otimes c_p, c_{p+1}\rangle$.   (B.18)

Lemma B.4. Let $f$ satisfy the assumptions of Lemma B.1, and additionally assume $f \in C^k$. Then the remainder $r_k$, given as (B.1) in Lemma B.1, can also be written in the form

  $r_k(x) = \int_0^1 \frac{(1-t)^{k-1}}{(k-1)!} \mathbb{E}\left[\left\langle \nabla^k f((1-t)Z + tx),\ H_k(x) - Z \otimes H_{k-1}(x)\right\rangle\right] dt$.   (B.19)
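The scalar computation underlying (B.16)–(B.17), $\mathbb{E}[Z\, He_k(Z)\, He_{k+1}(Z)] = (k+1)!$, can be checked exactly by expanding the polynomial product in coefficients and using the Gaussian moments $\mathbb{E}[Z^m] = (m-1)!!$ for even $m$ (a sketch with illustrative helpers, not the paper's code):

```python
import math

def he_coeffs(k):
    """Coefficients (ascending powers) of He_k via He_{n+1} = x He_n - n He_{n-1}."""
    prev, cur = [1.0], [0.0, 1.0]
    if k == 0:
        return prev
    for n in range(1, k):
        nxt = [0.0] + cur                # x * He_n
        for m, c in enumerate(prev):     # - n * He_{n-1}
            nxt[m] -= n * c
        prev, cur = cur, nxt
    return cur

def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def gaussian_moment(m):
    """E[Z^m] for Z ~ N(0,1): (m-1)!! for even m, 0 for odd m."""
    if m % 2 == 1:
        return 0.0
    return float(math.prod(range(1, m, 2)))  # (m-1)!! = 1*3*...*(m-1)

def expect(poly):
    return sum(c * gaussian_moment(m) for m, c in enumerate(poly))

# E[Z He_k(Z) He_{k+1}(Z)] = (k+1)!, the scalar case of (B.17).
for k in range(0, 7):
    p = poly_mul([0.0, 1.0], poly_mul(he_coeffs(k), he_coeffs(k + 1)))
    assert abs(expect(p) - math.factorial(k + 1)) < 1e-6
```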
Proof. Recall that $\partial^\gamma := \partial_1^{\gamma_1} \cdots \partial_d^{\gamma_d}$, and that $H_\gamma(z) e^{-\|z\|^2/2} = (-1)^{|\gamma|} \partial^\gamma(e^{-\|z\|^2/2})$. We then have, for $|\gamma| = k-1$,

  $\mathbb{E}[\partial_i f((1-t)Z + tx)\, H_\gamma(Z)] = (1-t)^{k-1} \mathbb{E}[\partial^{\gamma+e_i} f]$,
  $\mathbb{E}[\partial_i f((1-t)Z + tx)\, H_{\gamma+e_i}(Z)] = (1-t)^{k-1} \mathbb{E}[\partial^{\gamma+e_i} f\, Z_i]$,   (B.20)

using the fact that $f \in C^k$. We omitted the argument $(1-t)Z + tx$ from the right-hand side for brevity. To get the second equation, we moved only $\gamma$ of the $\gamma + e_i$ derivatives from $e^{-\|z\|^2/2}$ onto $\partial_i f$, leaving $-\partial_i(e^{-\|z\|^2/2}) = z_i e^{-\|z\|^2/2}$. Substituting these two equations into (B.1), we get

  $\mathbb{E}[(f(x) - f(Z)) K(x, Z)] = \int_0^1 (1-t)^{k-1} \sum_{i=1}^{d} \sum_{|\gamma|=k-1} \frac{1}{\gamma!} \mathbb{E}[(\partial^{\gamma+e_i} f)(H_{\gamma+e_i}(x) - Z_i H_\gamma(x))]\, dt$
  $= \frac{1}{(k-1)!} \int_0^1 (1-t)^{k-1} \sum_{i=1}^{d} \sum_{|\gamma|=k-1} \binom{k-1}{\gamma} \mathbb{E}[(\partial^{\gamma+e_i} f)(H_{\gamma+e_i}(x) - Z_i H_\gamma(x))]\, dt$.   (B.21)

Now, define the sets

  $A = \{(i, \gamma + e_i) : i = 1, \dots, d,\ \gamma \in \{0, 1, \dots\}^d,\ |\gamma| = k-1\}$,
  $B = \{(i, \tilde\gamma) : \tilde\gamma \in \{0, 1, \dots\}^d,\ |\tilde\gamma| = k,\ \tilde\gamma_i \ge 1\}$.   (B.22)

It is straightforward to see that $A = B$. Therefore,

  $\sum_{i=1}^{d} \sum_{|\gamma|=k-1} \binom{k-1}{\gamma} \mathbb{E}[\partial^{\gamma+e_i} f] H_{\gamma+e_i}(x) = \sum_{|\tilde\gamma|=k} \sum_{i :\, \tilde\gamma_i \ge 1} \binom{k-1}{\tilde\gamma - e_i} \mathbb{E}[\partial^{\tilde\gamma} f] H_{\tilde\gamma}(x)$
  $= \sum_{|\tilde\gamma|=k} \sum_{i :\, \tilde\gamma_i \ge 1} \binom{k}{\tilde\gamma} \frac{\tilde\gamma_i}{k} \mathbb{E}[\partial^{\tilde\gamma} f] H_{\tilde\gamma}(x) = \sum_{|\tilde\gamma|=k} \binom{k}{\tilde\gamma} \mathbb{E}[\partial^{\tilde\gamma} f] H_{\tilde\gamma}(x) = \langle \mathbb{E}[\nabla^k f], H_k(x)\rangle$.   (B.23)

Next, note that $\sum_{|\gamma|=k-1} \binom{k-1}{\gamma} \partial^\gamma \partial_i f\, H_\gamma(x) = \langle \nabla^{k-1} \partial_i f, H_{k-1}(x)\rangle$, and therefore

  $\sum_{i=1}^{d} \sum_{|\gamma|=k-1} \binom{k-1}{\gamma} \mathbb{E}[(\partial^{\gamma+e_i} f) Z_i] H_\gamma(x) = \mathbb{E}[\langle \nabla^k f, Z \otimes H_{k-1}(x)\rangle]$.   (B.24)

Substituting (B.23) and (B.24) into (B.21) gives

  $r_k(x) = \mathbb{E}[(f(x) - f(Z)) K(x, Z)] = \int_0^1 \frac{(1-t)^{k-1}}{(k-1)!} \mathbb{E}\left[\left\langle \nabla^k f((1-t)Z + tx),\ H_k(x) - Z \otimes H_{k-1}(x)\right\rangle\right] dt$.   (B.25)

In the next section, we obtain a pointwise upper bound on $|r_k(x)|$ in the case $f = \bar W$. In order for this bound to be tight in its dependence on $d$, we need a supplementary result on inner products with Hermite tensors. To motivate this supplementary result, consider bounding the inner product in (B.19) by the product of the Frobenius norms of the tensors on either side. As a rough heuristic, $\|\nabla^k f\|_F \sim d^{k/2} \|\nabla^k f\|$, where recall that $\|\nabla^k f\|$ is the operator norm of $\nabla^k f$. Therefore, we would prefer to bound the inner product in terms of $\|\nabla^k f\|$ to get a tighter dependence on $d$. A priori, however, this seems impossible, since $H_k(x)$ is not given by an outer product of $k$ vectors.
But the following representation of the order-$k$ Hermite polynomials will make this possible:

$$H_k(x) = \mathbb{E}\big[(x+iZ)^{\otimes k}\big], \tag{B.26}$$

where $Z\sim N(0,I_d)$. Using (B.26), we can bound scalar products of the form $\langle\nabla^k f, H_k(x)\rangle$ and $\langle\nabla^k f, Z\otimes H_{k-1}(x)\rangle$ in terms of the operator norm of $\nabla^k f$. More generally, we have the following lemma.

Lemma B.5. Let $T\in(\mathbb{R}^d)^{\otimes k}$ be a $k$-tensor, and $v\in\mathbb{R}^d$. Then for all $0\le\ell\le k$, we have

$$\big|\langle T,\ v^{\otimes\ell}\otimes H_{k-\ell}(x)\rangle\big| \lesssim \|T\|\,\|v\|^{\ell}\big(\|x\|^{k-\ell} + d^{\frac{k-\ell}{2}}\big).$$

Proof.
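As a quick one-dimensional sanity check of the representation (B.26), where $H_k$ reduces to the probabilists' Hermite polynomial $\mathrm{He}_k$, the expectation $\mathbb{E}[(x+iZ)^k]$ can be evaluated exactly through Gaussian moments and compared against NumPy's HermiteE polynomials. This is an illustrative aside, not part of the argument:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import comb

def gaussian_moment(m):
    # E[Z^m] for Z ~ N(0, 1): zero for odd m, double factorial (m-1)!! for even m
    if m % 2 == 1:
        return 0
    r = 1
    for j in range(m - 1, 0, -2):
        r *= j
    return r

def hermite_via_complex(k, x):
    # H_k(x) = E[(x + iZ)^k], expanded with the binomial theorem
    total = 0j
    for m in range(k + 1):
        total += comb(k, m) * x ** (k - m) * (1j) ** m * gaussian_moment(m)
    return total.real

x = 1.7
for k in range(6):
    coeffs = [0] * k + [1]  # coefficient vector selecting He_k
    assert abs(hermite_via_complex(k, x) - hermeval(x, coeffs)) < 1e-9
```

For instance, at $k=3$ the expansion yields $x^3 - 3x = \mathrm{He}_3(x)$, as expected.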
Using (B.26), we have

$$\langle T,\ v^{\otimes\ell}\otimes H_{k-\ell}(x)\rangle = \mathbb{E}\big[\langle T,\ v^{\otimes\ell}\otimes(x+iZ)^{\otimes(k-\ell)}\rangle\big]$$

and hence

$$\big|\langle T,\ v^{\otimes\ell}\otimes H_{k-\ell}(x)\rangle\big| \le \mathbb{E}\,\big|\langle T,\ v^{\otimes\ell}\otimes(x+iZ)^{\otimes(k-\ell)}\rangle\big| \le \|T\|\,\|v\|^{\ell}\,\mathbb{E}\big[\|x+iZ\|^{k-\ell}\big] \lesssim \|T\|\,\|v\|^{\ell}\big(\|x\|^{k-\ell} + d^{\frac{k-\ell}{2}}\big). \tag{B.27}$$

B.2 Hermite-Related Proofs from Section 4.2

In this section, we return to the setting in the main text. We let $W$ satisfy all the assumptions from Section 3.2, let $m\in\mathbb{R}^d$ and $\sigma\in\mathbb{R}^{d\times d}$ be such that $\|m\|,\|\sigma\|\lesssim 1$, and let $\bar W(x) = W(m+\sigma x)$. Also, let

$$r_k(x) = \bar W(x) - \sum_{j=0}^{k-1}\frac{1}{j!}\,\big\langle c_j(\bar W),\ H_j(x)\big\rangle, \tag{B.28}$$

where $c_j(\bar W) = \mathbb{E}[\bar W(Z)H_j(Z)]$ as usual. Combining Lemmas B.4 and B.5 allows us to upper bound quantities of the form $\mathbb{E}[|r_k(Y)|^p]$ in terms of the operator norm of $\nabla^k W$.

Corollary B.1. Let $r_k$ be as in (B.28), the remainder of the Hermite series expansion of $\bar W$, where $k=3$ if $W\in C^3$ and $k=4$ if $W\in C^4$. Let $Y\in\mathbb{R}^d$ be a random variable such that $\mathbb{E}[\|Y\|^s]<\infty$ for all $0\le s\le 2pk+2pq$, where $q$ is from Assumption W2. Then

$$\mathbb{E}\big[|r_k(Y)|^p\big] \lesssim \Big(\frac{d^k}{N^{k-2}}\Big)^{\frac p2}\Big(\sqrt{\mathbb{E}\,\|Y/\sqrt d\|^{2kp}} + \sqrt{\mathbb{E}\,\|Y/\sqrt d\|^{2(k-1)p}} + 1\Big)\times\Big(1+\sqrt{\mathbb{E}\big[\|Y/\sqrt d\|^{2pq}\big]}\Big). \tag{B.29}$$

Proof. Let $\nabla^k\bar W$ be shorthand for $\nabla^k\bar W((1-t)Z+tY)$. Using (B.19) for $f=\bar W$, we have

$$|r_k(Y)| \lesssim \int_0^1 \mathbb{E}_Z\Big[\big|\big\langle \nabla^k\bar W,\ H_k(Y)-Z\otimes H_{k-1}(Y)\big\rangle\big|\Big]\,dt. \tag{B.30}$$

Raising this inequality to the $p$th power and applying Jensen's inequality twice, we have

$$|r_k(Y)|^p \lesssim \int_0^1 \mathbb{E}_Z\Big[\big|\big\langle \nabla^k\bar W,\ H_k(Y)-Z\otimes H_{k-1}(Y)\big\rangle\big|^p\Big]\,dt. \tag{B.31}$$

We now take the $Y$-expectation of both sides, and we are free to assume $Y$ is independent of $Z$. Note that the integrand on the right-hand side can be bounded by $a\|Y\|^{p(q+k)}+b$ for some $a$ and $b$, since $\|\nabla^k\bar W((1-t)Z+tY)\|\lesssim(1+\|Z\|+\|Y\|)^q$ by Assumption W2, and since the tensors $H_k(Y)$, $H_{k-1}(Y)$ are made up of polynomials of $Y$ of order at most $k$.
Since $\mathbb{E}[\|Y\|^{pq+pk}]<\infty$ by assumption, we can bring the $Y$-expectation inside the integral. Hence

$$\mathbb{E}\big[|r_k(Y)|^p\big] \lesssim \int_0^1 \mathbb{E}\Big[\big|\big\langle\nabla^k\bar W,\ H_k(Y)-Z\otimes H_{k-1}(Y)\big\rangle\big|^p\Big]\,dt, \tag{B.32}$$

where the expectation is over both $Z$ and $Y$. Next, using Lemma B.5 we have

$$\big|\big\langle\nabla^k\bar W,\ H_k(Y)-Z\otimes H_{k-1}(Y)\big\rangle\big|^p \lesssim \|\nabla^k\bar W\|^p\Big(\|Y\|^{kp} + d^{\frac{kp}{2}} + \|Z\|^p\|Y\|^{(k-1)p} + \|Z\|^p d^{\frac{(k-1)p}{2}}\Big). \tag{B.33}$$

Substituting this into (B.32) we have

$$\begin{aligned}
\mathbb{E}\big[|r_k(Y)|^p\big] &\lesssim \int_0^1 \mathbb{E}\Big[\|\nabla^k\bar W\|^p\Big(\|Y\|^{kp}+d^{\frac{kp}{2}}+\|Z\|^p\|Y\|^{(k-1)p}+\|Z\|^p d^{\frac{(k-1)p}{2}}\Big)\Big]\,dt\\
&\lesssim \int_0^1 \mathbb{E}\big[\|\nabla^k\bar W\|^{2p}\big]^{\frac12}\Big(\sqrt{\mathbb{E}\,\|Y\|^{2kp}} + d^{\frac p2}\sqrt{\mathbb{E}\,\|Y\|^{2(k-1)p}} + d^{\frac{kp}{2}}\Big)\,dt\\
&\le d^{\frac{kp}{2}}\Big(\sqrt{\mathbb{E}\,\|Y/\sqrt d\|^{2kp}} + \sqrt{\mathbb{E}\,\|Y/\sqrt d\|^{2(k-1)p}} + 1\Big)\times\int_0^1\mathbb{E}\big[\|\nabla^k\bar W\|^{2p}\big]^{\frac12}\,dt.
\end{aligned} \tag{B.34}$$

We used Cauchy–Schwarz and the independence of $Y$ and $Z$ to get the second line. Finally, recall that $\nabla^k\bar W = \nabla^k\bar W((1-t)Z+tY)$ and note that since $\|\sigma\|\lesssim 1$, we have $\|\nabla^k\bar W((1-t)Z+tY)\|\lesssim\|\nabla^k W(m+(1-t)\sigma Z+t\sigma Y)\|$. We now apply Lemma 3.4 with $Y = m+(1-t)\sigma Z+t\sigma Y$. Note that

$$\mathbb{E}\Big[\big(\|m+(1-t)\sigma Z+t\sigma Y\|/\sqrt d\big)^{2pq}\Big] \lesssim 1+\mathbb{E}\big[\|Y/\sqrt d\|^{2pq}\big]$$

and hence

$$\mathbb{E}\big[\|\nabla^k\bar W\|^{2p}\big]^{\frac12} \lesssim \mathbb{E}\big[\|\nabla^k W(m+(1-t)\sigma Z+t\sigma Y)\|^{2p}\big]^{\frac12} \lesssim \Big(1+\sqrt{\mathbb{E}\big[\|Y/\sqrt d\|^{2pq}\big]}\Big)\,N^{p(1-k/2)} \tag{B.35}$$

for all $t\in[0,1]$. Combining this inequality with (B.34) and noting that $d^{kp/2}N^{p(1-k/2)} = (d^k/N^{k-2})^{p/2}$ gives (B.29).
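The final exponent bookkeeping, $d^{kp/2}N^{p(1-k/2)} = (d^k/N^{k-2})^{p/2}$, is elementary but easy to fumble; here is a throwaway numerical check with arbitrary sample values (purely illustrative):

```python
# Verify d^{kp/2} * N^{p(1 - k/2)} == (d^k / N^{k-2})^{p/2} for sample values
def lhs(d, N, k, p):
    return d ** (k * p / 2) * N ** (p * (1 - k / 2))

def rhs(d, N, k, p):
    return (d ** k / N ** (k - 2)) ** (p / 2)

for (d, N, k, p) in [(5.0, 7.0, 3, 2), (10.0, 1000.0, 4, 3)]:
    a, b = lhs(d, N, k, p), rhs(d, N, k, p)
    assert abs(a - b) <= 1e-12 * abs(a)
```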
C Proofs Related to Affine Invariance

Recall the equations

$$\mathbb{E}\big[\nabla V(m+S^{1/2}Z)\big] = 0, \qquad \mathbb{E}\big[\nabla^2 V(m+S^{1/2}Z)\big] = S^{-1} \tag{$E_V$}$$

and the definition of $R_V$ for a measure $\pi\propto e^{-V}$:

$$R_V = \Big\{(m,S)\in\mathbb{R}^d\times S^d_{++} \ :\ S\preceq 2H^{-1},\ \ \|\sqrt H\sqrt S\|^2 + \|\sqrt H(m-m^*)\|^2 \le 8\Big\}, \tag{C.1}$$

where $m^* = \operatorname{argmin}_{x\in\mathbb{R}^d} V(x)$ and $H = \nabla^2 V(m^*)$.

Lemma C.1. Let $V_2(x) = V_1(Ax+b)$ for some invertible $A\in\mathbb{R}^{d\times d}$ and $b\in\mathbb{R}^d$. Then the pair $(m_1,S_1)$ is a unique solution to $(E_{V_1})$ in the set $R_{V_1}$ if and only if the pair $(m_2,S_2)$ given by

$$m_2 = A^{-1}(m_1-b), \qquad S_2 = A^{-1}S_1A^{-T} \tag{C.2}$$

is a unique solution to $(E_{V_2})$ in the set $R_{V_2}$.

Proof. It suffices to prove the following two statements.
(1) If $(m_1,S_1)\in R_{V_1}$ solves $(E_{V_1})$, then $(m_2,S_2)$ given by (C.2) lies in $R_{V_2}$ and solves $(E_{V_2})$.

(2) If $(m_2,S_2)\in R_{V_2}$ solves $(E_{V_2})$, then $(m_1,S_1)$ given by $m_1 = Am_2+b$, $S_1 = AS_2A^T$ lies in $R_{V_1}$ and solves $(E_{V_1})$.

We prove the first statement, and the second follows by a symmetric argument. So let $(m_1,S_1)\in R_{V_1}$ solve $(E_{V_1})$. We first show that $(m_2,S_2)$ given by (C.2) solves $(E_{V_2})$. We have

$$\nabla V_2(x) = A^T\nabla V_1(Ax+b), \qquad \nabla^2 V_2(x) = A^T\nabla^2 V_1(Ax+b)\,A. \tag{C.3}$$

Note also that if $\sigma = A^{-1}S_1^{1/2}$ then $\sigma\sigma^T = A^{-1}S_1A^{-T}$.
We therefore have

$$\mathbb{E}\Big[\nabla V_2\Big(A^{-1}(m_1-b) + \big(A^{-1}S_1A^{-T}\big)^{1/2}Z\Big)\Big] = \mathbb{E}\Big[\nabla V_2\Big(A^{-1}(m_1-b) + A^{-1}S_1^{1/2}Z\Big)\Big] = A^T\,\mathbb{E}\Big[\nabla V_1\big(m_1+S_1^{1/2}Z\big)\Big] = 0. \tag{C.4}$$

Similarly,

$$\mathbb{E}\Big[\nabla^2 V_2\Big(A^{-1}(m_1-b) + \big(A^{-1}S_1A^{-T}\big)^{1/2}Z\Big)\Big] = \mathbb{E}\Big[\nabla^2 V_2\Big(A^{-1}(m_1-b) + A^{-1}S_1^{1/2}Z\Big)\Big] = A^T\,\mathbb{E}\Big[\nabla^2 V_1\big(m_1+S_1^{1/2}Z\big)\Big]A = A^T S_1^{-1} A = S_2^{-1}. \tag{C.5}$$

To conclude, we show $(m_2,S_2)\in R_{V_2}$. Let $m^*_i$ be the global minimizer of $V_i$ and $H_i = \nabla^2 V_i(m^*_i)$, $i=1,2$. Then $m^*_2 = A^{-1}(m^*_1-b)$ and $H_2 = A^T H_1 A$. Since $S_1\preceq 2H_1^{-1}$, it follows that $S_2 = A^{-1}S_1A^{-T} \preceq 2A^{-1}H_1^{-1}A^{-T} = 2H_2^{-1}$. Furthermore, direct substitution shows that

$$\|\sqrt{H_2}(m_2-m^*_2)\|^2 = (m_2-m^*_2)^T H_2 (m_2-m^*_2) = (m_1-m^*_1)^T H_1 (m_1-m^*_1) = \|\sqrt{H_1}(m_1-m^*_1)\|^2. \tag{C.6}$$

Finally, note that

$$\|\sqrt{H_2}\sqrt{S_2}\|^2 = \|\sqrt{S_2}\,H_2\sqrt{S_2}\| = \sup_{u\ne 0}\frac{u^T\sqrt{S_2}H_2\sqrt{S_2}\,u}{\|u\|^2} = \sup_{u\ne 0}\frac{u^T H_2 u}{\|\sqrt{S_2}^{\,-1}u\|^2} = \sup_{u\ne 0}\frac{u^T H_2 u}{u^T S_2^{-1} u} = \sup_{u\ne 0}\frac{u^T H_1 u}{u^T S_1^{-1} u} = \|\sqrt{H_1}\sqrt{S_1}\|^2. \tag{C.7}$$

Therefore, $\|\sqrt{H_2}\sqrt{S_2}\|^2 + \|\sqrt{H_2}(m_2-m^*_2)\|^2 = \|\sqrt{H_1}\sqrt{S_1}\|^2 + \|\sqrt{H_1}(m_1-m^*_1)\|^2 \le 8$.

Recall that

$$W(x) = n\,v\big(m^* + \sqrt{nH}^{\,-1}x\big), \qquad H = \nabla^2 v(m^*),$$

and that $N = nr$, where $r$ is from Assumption V1. The following preliminary calculation will be useful for showing that Assumptions V1 and V2 imply Assumptions W1 and W2, respectively. Given $x\in\mathbb{R}^d$, let $y = \sqrt{\alpha_2}\,\sqrt H^{\,-1}x$. We have

$$\sqrt N\,\|\nabla^3 W(\sqrt N x)\| \le \frac{\sqrt N}{(\sqrt{n\alpha_2})^{3}}\,\Big\|\nabla^3(nv)\big(m^*+\sqrt{nH}^{\,-1}\sqrt N x\big)\Big\| = \frac{\sqrt r}{\alpha_2\sqrt{\alpha_2}}\,\Big\|\nabla^3 v\big(m^*+\sqrt r\,\sqrt H^{\,-1}x\big)\Big\| = \frac{\sqrt r}{\alpha_2\sqrt{\alpha_2}}\,\Big\|\nabla^3 v\Big(m^*+\sqrt{\tfrac{r}{\alpha_2}}\,y\Big)\Big\|. \tag{C.8}$$

Analogously,

$$N\,\|\nabla^4 W(\sqrt N x)\| \le \frac{N}{(\sqrt{n\alpha_2})^{4}}\,\Big\|\nabla^4(nv)\big(m^*+\sqrt{nH}^{\,-1}\sqrt N x\big)\Big\| = \frac{r}{\alpha_2^2}\,\Big\|\nabla^4 v\big(m^*+\sqrt r\,\sqrt H^{\,-1}x\big)\Big\| = \frac{r}{\alpha_2^2}\,\Big\|\nabla^4 v\Big(m^*+\sqrt{\tfrac{r}{\alpha_2}}\,y\Big)\Big\|. \tag{C.9}$$

Lemma C.2. Assumptions V1, V2, and V3 imply Assumptions W1, W2, and W3 with $N = nr$, where $r$ is from Assumption V1.

Proof. Let $y = \sqrt{\alpha_2}\,\sqrt H^{\,-1}x$. Note that $\|y\|\le\|x\|$; in particular, if $\|x\|\le 1$ then $\|y\|\le 1$. To show that V1 implies W1, note that by the above calculation we have

$$\sqrt N\sup_{\|x\|\le 1}\|\nabla^3 W(\sqrt N x)\| \le \frac{\sqrt r}{\alpha_2\sqrt{\alpha_2}}\sup_{\|y\|\le 1}\Big\|\nabla^3 v\Big(m^*+\sqrt{\tfrac{r}{\alpha_2}}\,y\Big)\Big\| \le \frac12, \tag{C.10}$$

as desired.
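Lemma C.1's transformation rule (C.2) can be illustrated numerically for a quadratic potential $V_1(x) = \tfrac12(x-\mu)^T\Sigma^{-1}(x-\mu)$, whose solution of $(E_{V_1})$ is $(m_1,S_1)=(\mu,\Sigma)$; since the gradient is affine and $\mathbb{E}[Z]=0$, the stationarity equations reduce to deterministic linear-algebra identities. The matrices below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
# Quadratic potential V1(x) = 0.5 (x - mu)^T Sigma^{-1} (x - mu):
# the solution of (E_V1) is (m1, S1) = (mu, Sigma).
M = rng.standard_normal((d, d))
Sigma = M @ M.T + d * np.eye(d)                  # SPD covariance
mu = rng.standard_normal(d)
A = rng.standard_normal((d, d)) + 2 * np.eye(d)  # generically invertible
b = rng.standard_normal(d)

Ainv = np.linalg.inv(A)
m2 = Ainv @ (mu - b)                             # transformation (C.2)
S2 = Ainv @ Sigma @ Ainv.T

# grad V2(x) = A^T Sigma^{-1} (A x + b - mu); E[grad V2(m2 + S2^{1/2} Z)]
# reduces to grad V2(m2) because the gradient is affine and E[Z] = 0.
Sigma_inv = np.linalg.inv(Sigma)
grad_at_m2 = A.T @ Sigma_inv @ (A @ m2 + b - mu)
assert np.allclose(grad_at_m2, 0)

# Hess V2 = A^T Sigma^{-1} A should equal S2^{-1}, matching (C.5)
assert np.allclose(A.T @ Sigma_inv @ A, np.linalg.inv(S2))
```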
To show that V2 implies W2, fix $x\in\mathbb{R}^d$ and note that

$$\sqrt N\,\|\nabla^3 W(\sqrt N x)\| \le \frac{\sqrt r}{\alpha_2\sqrt{\alpha_2}}\,\Big\|\nabla^3 v\Big(m^*+\sqrt{\tfrac{r}{\alpha_2}}\,y\Big)\Big\| \le 1+\|y\|^q \le 1+\|x\|^q, \tag{C.11}$$

as desired. The calculation for the fourth derivative is analogous. To show that Assumption V3 implies W3, fix $\|x\|\ge\sqrt N$ and let $y = \sqrt{nH}^{\,-1}x$, so that $W(x) = nv(y+m^*)$. Note that $\|y\|\ge\|x\|/\sqrt{n\beta_2}\ge\sqrt{N/(n\beta_2)} = \sqrt{r/\beta_2}$. Hence we can apply Assumption V3 to conclude that

$$W(x) = nv(m^*+y) \ge (d+12q+36)\log\big(\|\sqrt{n\beta_2}\,y\|\big) \ge (d+12q+36)\log\big(\|\sqrt{nH}\,y\|\big) = (d+12q+36)\log\|x\|, \tag{C.12}$$

as desired.

D Logistic Regression Example

Details of Numerical Simulation. For the numerical simulation displayed in Figure 1, we take $d=2$ and $n = 100, 200, \ldots, 1000$. For each $n$, we draw ten sets of covariates $x_i$, $i=1,\ldots,n$, from $N(0,\lambda^2 I_d)$ with $\lambda=\sqrt5$, yielding ten posterior distributions $\pi_n(\cdot\mid x_{1:n})$. For each $\pi_n$ we compute the ground-truth mean and covariance by directly evaluating the integrals, using a regularly spaced grid (this is feasible in two dimensions). The mode $m^*$ of $\pi_n$ is found by a standard optimization procedure, and the Gaussian VI estimates $\hat m,\hat S$ are computed using the procedure described in [LCB+22]. We used the authors' implementation of this algorithm, found at https://github.com/marc-h-lambert/W-VI. We then compute the Laplace and VI mean and covariance approximation errors for each $n$ and each of the ten posteriors at a given $n$. The solid lines in Figure 1 depict the average approximation errors over the ten distributions at each $n$. The shaded regions depict the spread of the middle eight of the ten approximation errors.

Verifying the Assumptions. As discussed in Section 2.3, we make the approximation

$$v(z) \approx v_\infty(z) = -\mathbb{E}\big[Y\log s(X^Tz) + (1-Y)\log(1-s(X^Tz))\big].$$

Here, $X\sim N(0,\lambda^2 I_d)$ and $Y\mid X\sim\mathrm{Bernoulli}(s(X_1))$, since $X_1 = e_1^TX$. Recall that $s$ is the sigmoid, $s(a) = (1+e^{-a})^{-1}$. Below, the parameters $\alpha_2$, $\beta_2$, etc., are all computed for the function $v_\infty$. Note that $z = e_1$ is the global minimizer of $v_\infty$. We have $\nabla^2 v_\infty(z) = \mathbb{E}[s'(X^Tz)XX^T]$ and in particular, $\nabla^2 v_\infty(e_1) = \mathbb{E}[s'(X_1)XX^T]$. Also,

$$s'(a) = s(a)(1-s(a)) = \frac{1}{2(1+\cosh(a))}\in(0,1/4].$$

To lower bound $\lambda_{\min}(\nabla^2 v_\infty(e_1))$, note that for $\|u\|=1$ we have

$$u^T\nabla^2 v_\infty(e_1)u = \mathbb{E}\big[s'(X_1)(X^Tu)^2\big] = u_1^2\,\mathbb{E}[s'(X_1)X_1^2] + \lambda^2\sum_{j=2}^d u_j^2\,\mathbb{E}[s'(X_1)] \ge s'(\lambda)\Big(u_1^2\,\mathbb{E}\big[X_1^2\,\mathbf 1\{|X_1|\le\lambda\}\big] + \lambda^2\sum_{j=2}^d u_j^2\,\mathbb{P}(|X_1|\le\lambda)\Big) \gtrsim \lambda^2 s'(\lambda), \tag{D.1}$$

and hence $\alpha_2\gtrsim\lambda^2 s'(\lambda)$. Using that $s'\le 1/4$, we also have the upper bound

$$\lambda_{\max}\big(\nabla^2 v_\infty(e_1)\big) \le \frac{\lambda^2}{4} = \beta_2. \tag{D.2}$$

Next, we need to upper bound $\|\nabla^3 v_\infty\|$.
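The sigmoid identities used above, $s'(a) = s(a)(1-s(a)) = \tfrac{1}{2(1+\cosh a)} \in (0,1/4]$, can be spot-checked numerically; this is an illustrative aside:

```python
import math

def s(a):
    # logistic sigmoid s(a) = 1 / (1 + e^{-a})
    return 1.0 / (1.0 + math.exp(-a))

for a in [-4.0, -1.0, 0.0, 0.5, 3.0]:
    lhs = s(a) * (1.0 - s(a))                 # s'(a) = s(1 - s)
    rhs = 1.0 / (2.0 * (1.0 + math.cosh(a)))  # = 1 / (2(1 + cosh a))
    assert abs(lhs - rhs) < 1e-12
    assert 0.0 < lhs <= 0.25                  # s' lies in (0, 1/4]
```

The maximum $1/4$ is attained at $a=0$, where $s(0)=1/2$ and $\cosh(0)=1$.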
We have

$$\nabla^3 v_\infty(z) = \mathbb{E}\big[s''(X^Tz)\,X^{\otimes 3}\big], \qquad s''(a) = s(a)(1-s(a))(1-2s(a)),$$

so that

$$\|\nabla^3 v_\infty(z)\| = \sup_{\|u_1\|=\|u_2\|=\|u_3\|=1}\mathbb{E}\Big[s''(X^Tz)\prod_{k=1}^3(u_k^TX)\Big].$$

One can show that $s''(a)\in[-1,1]$ for all $a\in\mathbb{R}$. Hence

$$\mathbb{E}\Big[s''(X^Tz)\prod_{k=1}^3(u_k^TX)\Big] \le \mathbb{E}\Big[\prod_{k=1}^3\big|u_k^TX\big|\Big] \le \prod_{k=1}^3\mathbb{E}\big[|u_k^TX|^3\big]^{1/3} \le 2\lambda^3. \tag{D.3}$$

Here, we used that $u_k^TX \overset{d}{=} N(0,\lambda^2)$, whose third absolute moment is bounded by $2\lambda^3$. We therefore get the bound

$$\|\nabla^3 v_\infty(z)\| \le \beta_3 := 2\lambda^3. \tag{D.4}$$

Note that this constant bound holds for all $z\in\mathbb{R}^d$. Next, we need to find $r$ such that

$$\sup_{\|z-m^*\|\le\sqrt{r/\alpha_2}}\|\nabla^3 v_\infty(z)\| \le \frac{\alpha_2^{3/2}}{2\sqrt r}.$$

Using the uniform bound (D.4) on $\|\nabla^3 v_\infty\|$, it suffices to take $r$ such that

$$\beta_3 = \frac{\alpha_2^{3/2}}{2\sqrt r} \ \Longrightarrow\ r = \frac{\alpha_2^3}{4\beta_3^2} \gtrsim s'(\lambda)^3. \tag{D.5}$$

Finally, we verify Assumption V3. To do so, recall that $v_\infty$ is convex. Therefore, if $y$ lies on the line segment between $0$ and $z$, with $\|y\| = \sqrt{r/\beta_2} < \|z\|$, then

$$v_\infty(m^*+z) - v_\infty(m^*) \ge \frac{\|z\|}{\sqrt{r/\beta_2}}\big(v_\infty(m^*+y)-v_\infty(m^*)\big) \ge \sqrt{\beta_2/r}\,\inf_{\|y\|=\sqrt{r/\beta_2}}\big[v_\infty(m^*+y)-v_\infty(m^*)\big]\,\|z\|. \tag{D.6}$$

It is clear that if $\lambda$ is a constant then the parameters in this inequality, as well as the infimum, are lower bounded by absolute constants. Therefore, since $\|z\|\ge\log\|z\|$, Assumption V3 is satisfied.

E Proofs from Section 5

The proofs in this section rely on tensor-matrix and tensor-vector scalar products.
Let us review the rules of such scalar products, and how to bound the operator norms of these quantities. Let $v\in\mathbb{R}^d$, $A\in\mathbb{R}^{d\times d}$, and $T\in\mathbb{R}^{d\times d\times d}$. We define the vector $\langle T,A\rangle\in\mathbb{R}^d$ and the matrix $\langle T,v\rangle\in\mathbb{R}^{d\times d}$ by

$$\langle T,A\rangle_i = \sum_{j,k=1}^d T_{ijk}A_{jk},\quad i=1,\ldots,d, \qquad\qquad \langle T,v\rangle_{ij} = \sum_{k=1}^d T_{ijk}v_k,\quad i,j=1,\ldots,d. \tag{E.1}$$

We will always sum over either the last two indices or the last index of the tensor.
Note that the norm of the matrix ⟨T, v⟩ is given by ∥⟨T, v⟩∥ = sup_{∥u∥=∥w∥=1} uᵀ⟨T, v⟩w, and we have

uᵀ⟨T, v⟩w = Σ_{i,j=1}^d u_i w_j Σ_{k=1}^d T_{ijk} v_k = ⟨T, u ⊗ w ⊗ v⟩ ≤ ∥T∥∥v∥.

Therefore, ∥⟨T, v⟩∥ ≤ ∥T∥∥v∥. We also review the notion of operator norm for derivatives of a function, and note the distinction between this kind of operator norm and the standard tensor operator norm. Specifically, consider a C² function f = (f₁, …, f_d) : R^{d×d} × R^d → R^d, where R^{d×d} is endowed with the standard matrix norm. Then ∇_σf(σ, m) is a linear functional from R^{d×d} to R^d, and we let ⟨∇_σf(σ, m), A⟩ ∈ R^d denote the application of ∇_σf(σ, m) to A.
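The contractions in (E.1) and the identity uᵀ⟨T, v⟩w = ⟨T, u ⊗ w ⊗ v⟩ behind the bound above can be checked numerically. The following sketch is illustrative only (it is not part of the paper) and assumes NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
T = rng.standard_normal((d, d, d))
A = rng.standard_normal((d, d))
u, w, v = rng.standard_normal((3, d))

# (E.1): sum over the last two indices, or the last index, of the tensor.
T_A = np.einsum('ijk,jk->i', T, A)   # vector <T, A> in R^d
T_v = np.einsum('ijk,k->ij', T, v)   # matrix <T, v> in R^{d x d}

# Identity behind the bound ||<T, v>|| <= ||T|| ||v||:
# u^T <T, v> w = <T, u (x) w (x) v>.
lhs = u @ T_v @ w
rhs = np.einsum('ijk,i,j,k->', T, u, w, v)
assert np.isclose(lhs, rhs)

# ||<T, v>|| is the matrix (spectral) operator norm of the contraction.
spec = np.linalg.norm(T_v, 2)
```

Taking the supremum of the left-hand side over unit u, w recovers ∥⟨T, v⟩∥ as a spectral norm, which is why the bound ∥⟨T, v⟩∥ ≤ ∥T∥∥v∥ follows directly from the rank-one characterization of ∥T∥.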
Note that we can represent ∇_σf by the d × d × d tensor (∇_{σjk} f_i)_{i,j,k=1}^d, so that ⟨∇_σf(σ, m), A⟩ coincides with the definition given above of tensor-matrix scalar products. However, ∥∇_σf∥_op is not the standard tensor operator norm. Rather,

∥∇_σf∥_op = sup_{A∈R^{d×d}, ∥A∥=1} ∥⟨∇_σf, A⟩∥ = sup_{A∈R^{d×d}, ∥A∥=1, u∈R^d, ∥u∥=1} ⟨∇_σf, A ⊗ u⟩.

We continue to write ∥∇_σf∥ to denote the standard tensor operator norm, i.e.

∥∇_σf∥ = sup_{u,v,w∈R^d, ∥u∥=∥v∥=∥w∥=1} ⟨∇_σf, u ⊗ v ⊗ w⟩.

Note also that ∇_mf ∈ R^{d×d} is a matrix, and that

max(∥∇_σf(σ, m)∥_op, ∥∇_mf(σ, m)∥_op) ≤ ∥∇f(σ, m)∥_op ≤ ∥∇_σf(σ, m)∥_op + ∥∇_mf(σ, m)∥_op. (E.2)

Finally, recall the notation

B_r(0, 0) = {(σ, m) ∈ R^{d×d} × R^d : ∥σ∥² + ∥m∥² ≤ r²},  B_r = {σ ∈ R^{d×d} : ∥σ∥ ≤ r},  S_{c₁,c₂} = {σ ∈ S₊^d : c₁I_d ⪯ σ ⪯ c₂I_d}.
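The two norms genuinely differ. For example (an illustration, not taken from the paper), take T with T_{1jk} = δ_{jk}, so that ⟨T, A⟩ = (tr A, 0, …, 0): then ∥T∥_op = d, attained at A = I_d, while the standard tensor norm is sup u₁(vᵀw) = 1, since rank-one matrices u ⊗ w have spectral norm at most one. A quick numerical sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
T = np.zeros((d, d, d))
T[0] = np.eye(d)          # <T, A> = (tr A, 0, ..., 0)

# Operator norm over matrices: A = I_d has spectral norm 1, yet ||<T, I>|| = d.
contraction = np.einsum('ijk,jk->i', T, np.eye(d))
assert np.isclose(np.linalg.norm(contraction), d)

# Standard tensor norm: <T, u (x) v (x) w> = u[0] * (v . w) <= 1 on unit vectors.
vals = []
for _ in range(1000):
    u, v, w = rng.standard_normal((3, d))
    u, v, w = u / np.linalg.norm(u), v / np.linalg.norm(v), w / np.linalg.norm(w)
    vals.append(abs(np.einsum('ijk,i,j,k->', T, u, v, w)))
assert max(vals) <= 1 + 1e-9   # tensor norm is 1; operator norm is d
```

This gap is exactly why the proofs below track ∥·∥_op and ∥·∥ separately.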
(E.3)

Lemma E.1. Let f = (f₁, …, f_d) : R^{d×d} × R^d → R^d be C³, where R^{d×d} is the set of d × d matrices, endowed with the standard matrix operator norm. Suppose f(0, 0) = 0, ∇_σf(0, 0) = 0, ∇_mf(σ, m) is symmetric for all m, and ∇_mf(0, 0) = I_d. Let r > 0 be such that

sup_{(σ,m)∈B_r(0,0)} ∥∇f(σ, m) − ∇f(0, 0)∥_op ≤ 1/4. (E.4)

Then for each σ ∈ R^{d×d} such that ∥σ∥ ≤ r/2 there exists a unique m = m(σ) ∈ R^d such that f(σ, m(σ)) = 0 and (σ, m(σ)) ∈ B_r(0, 0).
Furthermore, the map σ ↦ m(σ) is C², with

(1/2)I_d ⪯ ∇_mf(σ, m)|_{m=m(σ)} ⪯ (3/2)I_d,  ∥∇_σm(σ)∥_op ≤ 1. (E.5)

The proof uses the following lemma.

Lemma E.2 (Lemma 1.3 in Chapter XIV of [Lan93]). Let U be open in a Banach space E, and let f : U → E be of class C¹. Assume that f(0) = 0 and f′(0) = I. Let r > 0 be such that B̄_r(0) ⊂ U. If |f′(z) − f′(x)| ≤ s for all z, x ∈ B̄_r(0), for some s ∈ (0, 1), then f maps B̄_r(0) bijectively onto B̄_{(1−s)r}(0).

Proof of Lemma E.1. Let φ : R^{d×d} × R^d → R^{d×d} × R^d be given by φ(σ, m) = (σ, f(σ, m)), so that φ(0, 0) = (0, 0), and

∇φ(σ, m) = [ I_{d×d}  0 ; ∇_σf(σ, m)  ∇_mf(σ, m) ]. (E.6)

For each (σ, m), (σ′, m′) ∈ B_r(0, 0), we have

∥∇φ(σ, m) − ∇φ(σ′, m′)∥_op = ∥∇f(σ, m) − ∇f(σ′, m′)∥_op ≤ 2 sup_{(σ,m)∈B_r(0,0)} ∥∇f(σ, m) − ∇f(0, 0)∥_op ≤ 1/2. (E.7)

Note also that ∇φ(0, 0) is the identity. Thus by Lemma E.2, we have that φ is a bijection from B_r(0, 0) to B_{r/2}(φ(0, 0)) = B_{r/2}(0, 0). Now, fix any σ ∈ R^{d×d} such that ∥σ∥ ≤ r/2. Then (σ, 0) ∈ B_{r/2}(0, 0), and hence there exists a unique (σ′, m) ∈ B_r(0, 0) such that (σ, 0) = φ(σ′, m) = (σ′, f(σ′, m)).
Thus σ = σ′ and f(σ, m) = 0. In other words, for each σ such that ∥σ∥ ≤ r/2 there exists a unique m = m(σ) such that (σ, m(σ)) ∈ B_r(0, 0) and such that 0 = f(σ, m). The map σ ↦ m(σ) is C² by standard Implicit Function Theorem arguments. To show that the first inequality of (E.5) holds, note that we have

∥∇_mf(σ, m(σ)) − ∇_mf(0, 0)∥_op ≤ ∥∇f(σ, m(σ)) − ∇f(0, 0)∥_op ≤ 1/4 ≤ 1/2

by (E.4), since we know that (σ, m(σ)) ∈ B_r(0, 0). Thus,

I_d = ∇²W(0) = ∇_mf(0, 0)  ⟹  (1/2)I_d ⪯ ∇_mf(σ, m(σ)) ⪯ (3/2)I_d. (E.8)

To show the second inequality of (E.5), we first need the supplementary bound

∥∇_σf(σ, m(σ))∥_op = ∥∇_σf(σ, m(σ)) − ∇_σf(0, 0)∥_op ≤ 1/2, (E.9)

which holds by the same reasoning as above. Now, ∂_{σjk}m = −∇_mf(σ, m)^{−1} ∂_{σjk}f(σ, m) ∈ R^d by standard Implicit Function Theorem arguments, where ∇_mf(σ, m) is a matrix, ∂_{σjk}f(σ, m) is a vector, and ∇_σm, ∇_σf are linear maps from R^{d×d} to R^d. Hence by the first inequality in (E.5) combined with (E.9) we have

∥∇_σm(σ)∥_op = sup_{∥A∥=1} ∥⟨∇_σm(σ), A⟩∥ = sup_{∥A∥=1} ∥∇_mf(σ, m)^{−1} Σ_{j,k=1}^d ∂_{σjk}f(σ, m) A_{jk}∥ = sup_{∥A∥=1} ∥∇_mf(σ, m)^{−1}⟨∇_σf, A⟩∥ ≤ ∥∇_mf(σ, m)^{−1}∥ ∥∇_σf∥_op ≤ 2 × (1/2) = 1. (E.10)

Lemma E.3. Let f : R^{d×d} × R^d → R^d be given by f(σ, m) = E[∇W(σZ + m)]. Then all the conditions of Lemma E.1 are satisfied; in particular, (E.4) is satisfied with r = 2√2. Thus the conclusions of Lemma E.1 hold with this choice of r.

Proof. Note that f is C² thanks to the fact that W is C³ and ∇W grows polynomially by Assumption W2. We then immediately have f(0, 0) = ∇W(0) = 0, ∇_mf(σ, m) = E[∇²W(m + σZ)] is symmetric for all m, σ, and ∇_mf(0, 0) = ∇²W(0) = I_d. To show ∇_σf(0, 0) = 0, we compute the (i, j, k) term of this tensor: ∂_{σjk}f_i = ∂_{σjk}E[∂_iW(m + σZ)] = E[∂²_{ij}W(m + σZ) Z_k], so that ∂_{σjk}f_i(0, 0) = E[∂²_{ij}W(0) Z_k] = 0. It remains to show that for r = 2√2 we have

sup_{(σ,m)∈B_r(0,0)} ∥∇f(σ, m) − ∇f(0, 0)∥_op ≤ 1/4.
(E.11)

First, note that

sup_{(σ,m)∈B_r(0,0)} ∥∇f(σ, m) − ∇f(0, 0)∥_op ≤ r sup_{(σ,m)∈B_r(0,0)} ∥∇²f(σ, m)∥_op,

where ∇²f(σ, m) is a bilinear form on (R^{d×d} × R^d)², and we have

∥∇²f(σ, m)∥_op ≤ ∥∇²_σf(σ, m)∥_op + 2∥∇_σ∇_mf(σ, m)∥_op + ∥∇²_mf(σ, m)∥_op.

For f(σ, m) = E[∇W(σZ + m)], these second order derivatives are given by

∂²_{mi,mj}f(σ, m) = E[∂²_{ij}∇W(m + σZ)],  ∂_{mi}∂_{σjk}f(σ, m) = E[∂²_{ij}∇W(m + σZ) Z_k],  ∂²_{σjk,σℓp}f(σ, m) = E[∂²_{jℓ}∇W(m + σZ) Z_k Z_p], (E.12)

each a vector in R^d. From the first line, we get that ∥∇²_mf(σ, m)∥_op ≤ E∥∇³W(m + σZ)∥, where ∥∇³W∥ is the standard tensor norm. From the second line, we get

∥∇_m∇_σf(σ, m)∥_op = sup_{∥A∥=1,∥x∥=1} |E[Σ_{i,j,k=1}^d ∂²_{ij}∇W(m + σZ) Z_k x_i A_{jk}]| = sup_{∥A∥=1,∥x∥=1} |E[Σ_{i,j=1}^d ∂²_{ij}∇W(m + σZ) x_i (AZ)_j]| = sup_{∥A∥=1,∥x∥=1} |E[⟨∇³W(m + σZ), x ⊗ AZ⟩]| ≤ sup_{∥A∥=1,∥x∥=1} E[∥x∥∥AZ∥∥∇³W(m + σZ)∥] ≤ √d √(E[∥∇³W(m + σZ)∥²]). (E.13)

A similar computation gives

∥∇²_σf(σ, m)∥_op ≤ sup_{∥A∥=1,∥B∥=1} E[∥AZ∥∥BZ∥∥∇³W(m + σZ)∥] ≤ 2d √(E[∥∇³W(m + σZ)∥²]) ≲ d/√N. (E.14)

Thus overall we have

∥∇²f(σ, m)∥_op ≤ (2d + 2√d + 1) √(E[∥∇³W(m + σZ)∥²]) ≤ 5d sup_{(σ,m)∈B_r(0,0)} √(E[∥∇³W(m + σZ)∥²]), (E.15)

and hence

sup_{(σ,m)∈B_r(0,0)} ∥∇f(σ, m) − ∇f(0, 0)∥_op ≤ 5rd sup_{(σ,m)∈B_r(0,0)} √(E[∥∇³W(m + σZ)∥²]) ≤ (10√2 d/√N)(√3 + √((2q)!)), (E.16)

where in the last line we applied Lemma E.5 and substituted r = 2√2. To conclude, recall that (√3 + √((2q)!))/√N ≤ 1/(40√2 d) by the assumption in the statement of Lemma 5.1.
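The factor (2q)! entering (E.16) comes from the Gaussian moment bound E[∥Z∥^{2q}] ≤ d^q (2q)! used inside Lemma E.5. Since ∥Z∥² follows a χ²_d distribution, whose q-th moment is exactly d(d + 2)⋯(d + 2q − 2), this bound can be verified exactly; a small illustrative check (not part of the paper):

```python
from math import factorial

def norm_moment(d, q):
    """Exact E[||Z||^(2q)] for Z ~ N(0, I_d): the q-th moment of a chi^2_d
    variable, namely d * (d + 2) * ... * (d + 2q - 2)."""
    out = 1
    for j in range(q):
        out *= d + 2 * j
    return out

# Check E[||Z||^(2q)] <= d^q * (2q)! over a grid of dimensions and exponents.
for d in range(1, 25):
    for q in range(1, 9):
        assert norm_moment(d, q) <= d ** q * factorial(2 * q)
```

The inequality holds because each factor satisfies d + 2j ≤ d(2j + 1), and the product of the odd numbers (2q − 1)!! is at most (2q)!.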
Lemma E.4. Let r = 2√2 and let σ ∈ S_{0,r/2} ↦ m(σ) ∈ R^d be the restriction to symmetric nonnegative matrices of the map furnished by Lemmas 5.2 and 5.3. Then the function F given by F(σ) = E[∇²W(m(σ) + σZ)]^{−1/2} is well-defined and a strict contraction on S_{0,r/2}. Moreover, F(S_{0,r/2}) ⊆ S_{c₁,c₂} ⊆ S_{0,r/2}, where c₁ = √(2/3), c₂ = √2 = r/2.

Proof. First, let G(σ) = E[∇²W(m(σ) + σZ)] and f(σ, m) = E[∇W(m + σZ)] as in Lemma E.3. Note that ∇_mf(σ, m) = E[∇²W(σZ + m)], so that G(σ) = ∇_mf(σ, m)|_{m=m(σ)}, and hence by (E.5) of Lemma E.1 we have

(1/2)I_d ⪯ G(σ) ⪯ (3/2)I_d,  ∀σ ∈ S_{0,r/2}. (E.17)

But then G(σ) has a unique invertible symmetric positive definite square root, and we define F(σ) = G(σ)^{−1/2} to be the inverse of this square root. Moreover, using (E.17), it follows that c₁I_d ⪯ F(σ) ⪯ c₂I_d for all σ ∈ S_{0,r/2}, where c₁ = √(2/3) and c₂ = √2 = r/2. In other words, F(S_{0,r/2}) ⊆ S_{c₁,c₂} ⊆ S_{0,r/2}. It remains to show F is a contraction on S_{0,r/2}. Let σ₁, σ₂ ∈ S_{0,r/2}. We will first bound ∥G(σ₁) − G(σ₂)∥.
We have

∥G(σ₁) − G(σ₂)∥ ≤ ∥σ₁ − σ₂∥ sup_{σ∈S_{0,r/2}} ∥∇_σG(σ)∥_op, (E.18)

and

∥∇_σG(σ)∥_op = sup_{∥A∥=1} ∥⟨∇_σG(σ), A⟩∥ = sup_{∥A∥=1} ∥E[⟨∇³W, ⟨A, ∇_σ(m(σ) + σZ)⟩⟩]∥. (E.19)

Here, the quantities inside of the ∥·∥ on the right are matrices. Indeed, ⟨∇_σG, A⟩ denotes the application of ∇_σG to A. Since G sends matrices to matrices, ∇_σG is a linear functional which also sends matrices to matrices. In the third line, ∇_σ(m(σ) + σZ) should be interpreted as a linear functional from R^{d×d} to R^d, so ⟨A, ∇_σ(m(σ) + σZ)⟩ is a vector in R^d, and the inner product of this vector with the d × d × d tensor ∇³W is a matrix. Using that ∥⟨T, x⟩∥ ≤ ∥T∥∥x∥, as explained at the beginning of this section, we have

∥⟨∇³W, ⟨A, ∇_σ(m(σ) + σZ)⟩⟩∥ ≤ ∥∇³W∥ ∥⟨A, ∇_σ(m(σ) + σZ)⟩∥ ≤ ∥∇³W∥ ∥∇_σ(m(σ) + σZ)∥_op = ∥∇³W∥ ∥∇_σm(σ) + Z ⊗ I_d∥_op ≤ ∥∇³W∥ (1 + ∥Z∥).
(E.20)

To get the last bound, we used that ∥∇_σm(σ)∥_op ≤ 1, shown in Lemma E.3. We also use the fact that ∥Z ⊗ I_d∥_op = sup_{∥A∥=1} ∥⟨A, Z ⊗ I_d⟩∥ = sup_{∥A∥=1} ∥AZ∥ = ∥Z∥. (Recall that since Z ⊗ I_d is part of ∇_σm, we are considering Z ⊗ I_d as an operator on matrices rather than as a d × d × d tensor, and this is why we take the supremum over matrices A.) Substituting (E.20) back into (E.18), we get

∥G(σ₁) − G(σ₂)∥ ≤ ∥σ₁ − σ₂∥ sup_{σ∈S_{0,r/2}} E[∥∇³W(m(σ) + σZ)∥ (1 + ∥Z∥)] ≤ ∥σ₁ − σ₂∥ √2 (1 + √d) (√3 + √((2q)!))/√N ≤ ∥σ₁ − σ₂∥ (1 + √d)/(40d). (E.21)

The second inequality is by Cauchy–Schwarz and Lemma E.5 below. The third inequality uses that (√3 + √((2q)!))/√N ≤ 1/(40√2 d), by the assumption in the statement of Lemma 5.1. Now, note that thanks to Lemma E.3, both λ_min(G(σ₁)) and λ_min(G(σ₂)) are bounded below by 1/2. Using Lemma E.6, we therefore have

∥F(σ₁) − F(σ₂)∥ ≤ √2 ∥G(σ₁) − G(σ₂)∥ ≤ ((1 + √d)/(20√2 d)) ∥σ₁ − σ₂∥. (E.22)

Hence F is a strict contraction.
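The matrix square-root perturbation bound of Lemma E.6, invoked in (E.22), can be sanity-checked numerically on random positive definite matrices. The sketch below is illustrative only and assumes NumPy; it computes A^{−1/2} via an eigendecomposition and takes the smaller of the two minimal eigenvalues, which handles the without-loss-of-generality ordering in the lemma:

```python
import numpy as np

rng = np.random.default_rng(2)

def psd_inv_sqrt(A):
    """A^{-1/2} for symmetric positive definite A, via eigendecomposition."""
    lam, V = np.linalg.eigh(A)
    return (V / np.sqrt(lam)) @ V.T

d = 5
for _ in range(200):
    X, Y = rng.standard_normal((2, d, d))
    A0 = X @ X.T + 0.5 * np.eye(d)   # positive definite by construction
    A1 = Y @ Y.T + 0.5 * np.eye(d)
    lam_min = min(np.linalg.eigvalsh(A0)[0], np.linalg.eigvalsh(A1)[0])
    lhs = np.linalg.norm(psd_inv_sqrt(A1) - psd_inv_sqrt(A0), 2)
    rhs = np.linalg.norm(A1 - A0, 2) / (2 * lam_min ** 1.5)
    assert lhs <= rhs + 1e-9   # the bound of Lemma E.6
```

In (E.22) the lemma is applied with λ_min ≥ 1/2, so the prefactor 1/(2λ_min^{3/2}) becomes at most √2.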
Lemma E.5. Assume 4√2 ≤ √(N/d). Then

sup_{(σ,m)∈B_{2√2}(0,0)} E[∥∇³W(m + σZ)∥²] ≤ (3 + (2q)!)/N.

Proof. Fix ∥m∥, ∥σ∥ ≤ 2√2, so that 2∥m∥ ≤ √N and 2∥σ∥√d ≤ √N. By Assumption W2, we have

N E[∥∇³W(m + σZ)∥²] ≤ 2 + 2E[∥(m + σZ)/√N∥^{2q}] ≤ 2 + 2^{2q}[(∥m∥/√N)^{2q} + (∥σ∥/√N)^{2q} E[∥Z∥^{2q}]] ≤ 2 + (2∥m∥/√N)^{2q} + (2∥σ∥√d/√N)^{2q} (2q)! ≤ 3 + (2q)!. (E.23)

Lemma E.6.
Let $A_0$ and $A_1$ be psd, and $A_0^{1/2}$, $A_1^{1/2}$ their unique psd square roots. Assume without loss of generality that $\lambda_{\min}(A_0) \le \lambda_{\min}(A_1)$. Then
$$\|A_1^{-1/2} - A_0^{-1/2}\| \le \frac{\|A_1 - A_0\|}{2\,\lambda_{\min}(A_0)^{3/2}}.$$

Proof. First note that $A_1^{-1/2} - A_0^{-1/2} = A_1^{-1/2}\big(A_0^{1/2} - A_1^{1/2}\big)A_0^{-1/2}$ and hence
$$\|A_1^{-1/2} - A_0^{-1/2}\| \le \|A_1^{-1/2}\|\,\|A_0^{-1/2}\|\,\|A_1^{1/2} - A_0^{1/2}\| \le \frac{\|A_1^{1/2} - A_0^{1/2}\|}{\lambda_{\min}(A_0)}.$$
Now, define $A_t = A_0 + t(A_1 - A_0)$ and let $B_t = A_t^{1/2}$, where $B_t$ is the unique psd square root of $A_t$. We then have $\|A_1^{1/2} - A_0^{1/2}\| \le \sup_{t\in[0,1]} \|\dot B_t\|$. We will now express $\dot B_t$ in terms of $\dot A_t$ and $B_t$. Differentiating $B_t^2 = A_t$, we get
$$B_t \dot B_t + \dot B_t B_t = \dot A_t = A_1 - A_0. \quad (E.24)$$
Now, one can check that the solution $\dot B_t$ to this equation is given by
$$\dot B_t = \int_0^\infty e^{-sB_t}(A_1 - A_0)\,e^{-sB_t}\,ds,$$
and hence
$$\|\dot B_t\| \le \|A_1 - A_0\| \int_0^\infty \|e^{-sB_t}\|^2\,ds = \frac{\|A_1 - A_0\|}{2\,\lambda_{\min}(B_t)} = \frac{\|A_1 - A_0\|}{2\sqrt{\lambda_{\min}(A_t)}}.$$
Now note that $\lambda_{\min}(A_t) \ge \lambda_{\min}(A_0)$, since $A_t$ is just a convex combination of $A_0$ and $A_1$. Hence $\|\dot B_t\| \le \|A_1 - A_0\|\big/\big(2\sqrt{\lambda_{\min}(A_0)}\big)$ for all $t \in [0,1]$. Combining all of the above estimates gives
$$\|A_1^{-1/2} - A_0^{-1/2}\| \le \frac{\|A_1 - A_0\|}{2\,\lambda_{\min}(A_0)^{3/2}}.$$
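Lemma E.6 is easy to sanity-check numerically. The following sketch (illustrative only, not part of the paper) verifies the operator-norm bound on random psd matrices:

```python
import numpy as np

def inv_sqrt(a):
    # Inverse of the unique psd square root, via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v / np.sqrt(w)) @ v.T

rng = np.random.default_rng(0)
d = 8

def rand_psd():
    # Random symmetric matrix with eigenvalues bounded away from 0.
    m = rng.standard_normal((d, d))
    return m @ m.T + np.eye(d)

a0, a1 = rand_psd(), rand_psd()
# Order so that lambda_min(a0) <= lambda_min(a1), as the lemma assumes.
if np.linalg.eigvalsh(a0)[0] > np.linalg.eigvalsh(a1)[0]:
    a0, a1 = a1, a0

lhs = np.linalg.norm(inv_sqrt(a1) - inv_sqrt(a0), 2)  # operator norm
rhs = np.linalg.norm(a1 - a0, 2) / (2 * np.linalg.eigvalsh(a0)[0] ** 1.5)
print(lhs <= rhs)  # the bound of Lemma E.6 holds
```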
diff --git a/AdFQT4oBgHgl3EQfMTYr/content/tmp_files/2301.13267v1.pdf.txt b/AdFQT4oBgHgl3EQfMTYr/content/tmp_files/2301.13267v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..bdadecc8f3abc32132ebe492d6dbb244d74da530
--- /dev/null
+++ b/AdFQT4oBgHgl3EQfMTYr/content/tmp_files/2301.13267v1.pdf.txt
@@ -0,0 +1,1613 @@
ARCHISOUND: AUDIO GENERATION WITH DIFFUSION

flavio schneider
Master's Thesis
Supervised by Zhijing Jin, Prof. Bernhard Schölkopf
ETH Zurich
January 2023

ABSTRACT

The recent surge in popularity of diffusion models for image generation has brought new attention to the potential of these models in other areas of media generation. One area that has yet to be fully explored is the application of diffusion models to audio generation. Audio generation requires an understanding of multiple aspects, such as the temporal dimension, long-term structure, multiple layers of overlapping sounds, and the nuances that only trained listeners can detect. In this work, we investigate the potential of diffusion models for audio generation. We propose a set of models to tackle multiple aspects, including a new method for text-conditional latent audio diffusion with stacked 1D U-Nets, that can generate multiple minutes of music from a textual description. For each model, we make an effort to maintain reasonable inference speed, targeting real-time on a single consumer GPU. In addition to trained models, we provide a collection of open source libraries with the hope of simplifying future work in the field. Samples can be found at bit.ly/audio-diffusion.
CONTENTS

1 Introduction 1
  1.1 Audio Generation 1
  1.2 Challenges 1
  1.3 Existing Methods 2
  1.4 Research Questions 2
  1.5 Contributions 4
    1.5.1 Models 4
    1.5.2 Libraries 4
  1.6 Structure of the Thesis 5
2 Audio Representation 7
  2.1 Desirable Properties 7
    2.1.1 Compressibility 7
    2.1.2 Decodability 7
    2.1.3 Diffuseability 7
  2.2 Waveform 8
  2.3 Spectrograms 8
    2.3.1 STFT 8
    2.3.2 MEL 10
3 Existing Diffusion Methods 11
  3.1 DDPM-Diffusion 11
    3.1.1 Noising (0 → t) 12
    3.1.2 Denoising (t − 1 ← t) 13
    3.1.3 Training Objective 13
    3.1.4 Sampling 14
    3.1.5 Limitations 14
  3.2 DDIM 14
  3.3 V-Diffusion 15
    3.3.1 Noising (0 → σt) 15
    3.3.2 Denoising (σt−1 ← σt) 16
    3.3.3 Training Objective 16
    3.3.4 Sampling (σ0 = 0 ← · · · ← σt−1 ← σt = 1) 16
4 Architectures 17
  4.1 Our a-unet Library 17
    4.1.1 Background of U-Net 17
    4.1.2 U-Net Block 17
    4.1.3 Items 19
    4.1.4 Plugins 20
  4.2 Our audio-encoders-pytorch Library 21
5 Models 23
  5.1 Overview 23
  5.2 Diffusion Unconditional Generator 23
    5.2.1 Motivation 23
    5.2.2 Method 23
    5.2.3 Diffusion Method 24
    5.2.4 Transforms 25
    5.2.5 Usage 26
    5.2.6 Evaluation 27
  5.3 Text-conditional Diffusion 27
    5.3.1 Motivation 27
    5.3.2 Method 28
    5.3.3 Evaluation 29
  5.4 Diffusion Auto-Encoders with Latent Diffusion 30
    5.4.1 Motivation 30
    5.4.2 Method 30
    5.4.3 Evaluation 31
  5.5 Diffusion Upsampler 32
    5.5.1 Motivation 32
    5.5.2 Method 32
    5.5.3 Evaluation 33
  5.6 Diffusion Vocoder 34
    5.6.1 Motivation 34
    5.6.2 Method 34
    5.6.3 Evaluation 35
  5.7 Training Info 35
    5.7.1 Data 35
    5.7.2 Training 35
6 Future Work 37
7 Conclusion 39
Bibliography 41

1 INTRODUCTION

Music is an art of time at the intersection of fine-grained perception and symbolic pattern recognition.
In this work, we investigate the use of modern deep learning diffusion models to generate music, or more broadly audio, in order to gain a deeper understanding of this intersection.

1.1 Audio Generation

Audio generation refers to the process of automatically synthesizing novel waveforms using deep learning models. It has commonly been approached in two different ways: symbolically or at the waveform level. Symbolic audio generation involves creating a representation of the audio using symbols, such as MIDI data, which can then be converted into an audio waveform. This method is often easier to work with, but it can be difficult to capture all the nuanced details of a sound using symbols. Waveform-based audio generation, on the other hand, involves generating the raw audio waveform directly. This method is more complex, due to the sheer number of values that have to be generated per second, but it allows for a more precise and detailed representation of sound that includes all of its intricacies. Furthermore, audio generation can be unconditional or conditional. Unconditional models are trained only on audio data and are able to generate new samples without any additional input. Conditional models, on the other hand, are trained on pairs of audio data and some kind of conditioning information, such as a text description, genre label, lyrics, speaker id, or some other description of the audio. At inference time, this conditioning information can be used to guide the generation of novel audio samples that match the desired characteristics. In this thesis, we will explore methods of conditional and unconditional waveform-level generation.

1.2 Challenges

Multiple tradeoffs have to be considered when generating audio at the waveform level.
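A quick back-of-the-envelope computation (an illustrative sketch, not from the thesis) makes the scale of the problem concrete:

```python
# Raw values a waveform-level model must produce per unit of time.
sample_rate = 48_000  # samples per second at 48kHz
channels = 2          # stereo

def num_values(seconds):
    return sample_rate * channels * seconds

print(num_values(1))    # 96000 values for a single second
print(num_values(180))  # 17280000 values for a three-minute song
```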
To generate a single second of high-quality 48kHz stereo audio, 96000 values must be generated, which is comparable in size to a medium-resolution image. If the goal is to generate an entire song (hundreds of seconds) while maintaining high quality and a reasonable generation speed, the task becomes much more challenging. A common approach to generating long audio sequences is to do so in chunks; however, if the context length (the amount of audio that the model can consider at any given time) is not sufficient, the resulting structure may not be consistent over multiple seconds or minutes of generation. A longer context may allow for more consistent coarse structure, but may also lead to lower overall quality of detail, or vice versa.

1.3 Existing Methods

In this section, we will review some of the most well-known or influential waveform-based methods that have been developed to date. One of the pioneering waveform-level generation models is WaveNet (2016 [8]), a fully convolutional architecture that exploits dilated convolutions with various dilation factors in order to capture a large context. It is able to synthesize a few seconds of both speech and classical piano music at 16kHz. Jukebox (2020 [2]) uses multiple quantized autoencoders to discretize sounds at 3 different resolutions, followed by a cascade of transformer upsampler models to generate the quantized representations autoregressively. Jukebox is able to generate 44kHz music conditioned on lyrics, artists, and genres. The stack of transformers trades off generation speed for structure and quality. AudioLM (2022 [1]) uses a (residual) quantized autoencoder to compress the waveform into discrete tokens and a semantic encoder; later, a cascade of transformer decoders (semantic, coarse, fine) is used to generate 16kHz audio continuations top-down from the semantic representation.
Musika (2022) trains a set of 1D convolutional autoencoders to compress log-magnitude spectrograms, and a vocoder to reconstruct both phase and magnitude from the compressed representation, using a 2D GAN discriminator trained on sequential chunks of audio; it exploits this process autoregressively to generate longer sequences of 44kHz audio. This method has a limited context length, but is very efficient given the 1D structure of convolutions. Riffusion¹ (2022) fine-tunes the Stable Diffusion model [12] on chunks of mel-spectrograms of 5s at 44kHz, and uses style transfer to generate multiple coherent concatenated images while conditioning on a textual description of the song. This method has a limited 5s context length, and trades off speed given the large 2D architecture, but works surprisingly well considering that the original model is trained on images, not audio.

1.4 Research Questions

Diffusion models have recently demonstrated exceptional capabilities in the field of image generation [11, 12], leading to an explosion of incredible AI-generated art². Iteratively removing small amounts of noise from pure noise allows diffusion models to hallucinate novel samples that share common attributes with the data in the training set. Compared to GANs, diffusion models in the image domain don't suffer from training instability, scale well with parameter size, and have good mode coverage. As long as the training data can be progressively corrupted from a clean to a fully covered state, diffusion models have the potential to be applied to multiple domains to generate novel samples. This opens up a wide range of possibilities beyond image generation, including video and audio generation. In this thesis, we explore the potential of diffusion models for audio generation.

1 https://www.riffusion.com/about
2 https://www.midjourney.com/showcase/
We will explore whether diffusion models can be used on audio as effectively as with images. The aim is to generate high-quality 48kHz stereo audio as efficiently as possible and to control the generation in different ways, with a focus on text-conditional audio generation.

1.5 Contributions

1.5.1 Models

We introduce the following models, some of which are/will be accessible in the archisound library:

• Long: a latent diffusion model for text-conditional music generation that is capable of generating audio with an extended context of multiple minutes at 48kHz, targeting context length and structure (∼857M parameters).
• Crisp: a text-conditional audio generation diffusion model with a context of tens of seconds at 48kHz, targeting simplicity and high-quality waveforms (∼419M parameters).
• Upsampler: a diffusion model to upsample music from 3kHz to 48kHz (∼238M parameters).
• Vocoder: a diffusion model to reconstruct 48kHz waveforms from 80-channel mel-spectrograms, with variable input length (∼178M parameters).

1.5.2 Libraries

Moreover, we open-source the following libraries, on which the previous models are based:

• archisound³, our library including trained models ready to use. This repository doesn't contain any modelling code, but acts as a wrapper and documentation for our models hosted on Huggingface⁴.
• audio-diffusion-pytorch⁵ (ADP), the main library including the proposed audio diffusion models. This library has both a-unet and audio-encoders-pytorch as dependencies. At the time of writing, this library has 550+ stars on GitHub, and has been downloaded more than 50000 times on pip.
• a-unet⁶, a highly customizable library to build U-Net architectures in any dimension, expansible with multiple blocks and plugins. This library can be used for any type of grid data: 1D, 2D, 3D.
• audio-encoders-pytorch⁷ (AEP), a set of encoders and autoencoders for 1D data.
3 https://github.com/archinetai/archisound
4 https://huggingface.co/archinetai
5 https://github.com/archinetai/audio-diffusion-pytorch
6 https://github.com/archinetai/a-unet
7 https://github.com/archinetai/audio-encoders-pytorch

Some additional libraries we open-source that are not documented in this thesis, but might nevertheless be interesting to the reader, include: cqt-pytorch⁸ for invertible CQT spectrograms using NSGT, and bitcodes-pytorch⁹, a method for vector-quantization into binary codes.

1.6 Structure of the Thesis

In Chapter 2, we present the various audio representations and provide a set of tradeoffs that must be considered when selecting an appropriate representation. In Chapter 3, we describe the general principles of diffusion and then delve into the specific diffusion methods that we have tested. In Chapter 4, we examine our custom architectures, including the U-Net and autoencoder, and provide detailed descriptions of each component and how they can be easily integrated into our library. In Chapter 5, we propose a range of diffusion models that combine the diffusion methods from Chapter 3 with our custom architecture from Chapter 4. Finally, in Chapters 6 and 7, we discuss potential future work and present our conclusions.

8 https://github.com/archinetai/cqt-pytorch
9 https://github.com/archinetai/bitcodes-pytorch

2 AUDIO REPRESENTATION

In the following section, we will introduce the different types of audio representation that we can choose from, and compare the different tradeoffs. Before that, we'll have a look at the different desirable properties that should be considered.

2.1 Desirable Properties

2.1.1 Compressibility

We define compressibility as the approximate number of values per second needed for high-quality audio compared to the original waveform, and how many can be easily removed without a significant loss in fidelity, e.g.
by applying a convolution-only autoencoder on the representation.

2.1.1.1 Perceptibility

Perceptibility describes how close the representation is to human hearing. This is important since, if we compress a representation that carries a lot of information we cannot perceive in the first place, we will lose a lot of useful capacity. More specifically, humans hear sound in the range of frequencies from 20Hz to 20kHz, on a logarithmic scale, which means that the frequency resolution decreases as we approach 20kHz.

2.1.2 Decodability

Decodability refers to how simple and fast it is to decode the given representation back to the waveform domain so that it can be reproduced.

2.1.3 Diffuseability

Diffuseability is a set of desirable properties that are important in order for a diffusion model to be applicable. In particular, (1) the values should be approximately in the range [−1, 1], (2) the signal should ideally have some inductive biases that can be exploited by the network (primarily 1D or 2D convolutional blocks), and (3) time-shift invariance if we are doing inpainting or autoregressive generation, i.e.
the representation should look the same at different time steps for the same sound.

2.2 Waveform

[This section is garbled in the extraction. Recoverable fragments: waveforms are represented as tensors of shape [c, T], where c is the number of channels and T the number of points used to represent the audio; at 48kHz, one second of audio gives T = 48000, i.e. 48000×2 values for stereo. Waveforms are a basic representation that is easy to use directly with standard 1D convolutional architectures, but long sequences are slow to diffuse, and a standard L2 loss on raw waveforms makes high frequencies harder to reconstruct than low frequencies, since they vary more rapidly over time.]

2.3 Spectrograms

2.3.1 STFT

[Also garbled. Recoverable fragments: the STFT represents audio as a complex-valued tensor of shape [c, F, T], where c is the number of channels, F the number of frequencies, and T the number of time frames; each column represents a small chunk of the original waveform, isolated by a window function w. The STFT can be computed efficiently using the FFT, and can be viewed as a 1D convolution of the signal with predefined filters (cosine waves for the real part, sine waves for the imaginary part).]
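The logarithmic frequency perception described under perceptibility is what the MEL scale (Section 2.3.2) approximates. As an illustrative sketch (the HTK mel formula used here is a common convention, not taken from the thesis):

```python
import numpy as np

def hz_to_mel(f_hz):
    # HTK mel formula: equal mel steps are roughly equal perceived pitch steps.
    return 2595.0 * np.log10(1.0 + np.asarray(f_hz) / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (np.asarray(m) / 2595.0) - 1.0)

# 80 mel-spaced frequencies covering the audible 20 Hz - 20 kHz range,
# as used by 80-channel mel-spectrograms.
mels = np.linspace(hz_to_mel(20.0), hz_to_mel(20000.0), 80)
freqs = mel_to_hz(mels)
# Resolution is dense at low frequencies and sparse near 20 kHz.
print(np.diff(freqs)[0] < np.diff(freqs)[-1])  # → True
```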
[The remainder of Section 2.3.1 and Section 2.3.2 (MEL) are garbled in the extraction. Recoverable fragments: magnitude and phase are defined as mag(X(t, ν)) := |X(t, ν)| and phase(X(t, ν)) := ∠X(t, ν), a representation that can be inverted back to real and imaginary parts, and hence back to the waveform. Magnitude spectrograms can be easily compressed (up to 32×) with very small loss in quality; phase, on the other hand, is very hard to compress, since a lot of randomness is still present in its details and the inductive biases of spatial locality that convolutions exploit do not easily apply. Common practice is to discard the phase altogether and train an additional model (called a vocoder) to predict the phase, or directly the waveform, from the magnitude information. The MEL scale assigns frequencies on a logarithmic scale, closer to human perception than a linearly scaled spectrogram, making it a very common choice in audio applications; recovering a waveform from a mel-scaled magnitude spectrogram requires optimization-based inversion and phase reconstruction. Figure captions: "Figure 2: Magnitude spectrogram and phase of a single channel STFT", "Figure 3: MEL-scale spectrogram, amplitude scaled with log(1 + ·) for visibility".]

3 EXISTING DIFFUSION METHODS

Diffusion models, first proposed in [3, 17], are most commonly implemented with a U-Net [7, 13] that is repeatedly called during inference for each denoising step.
Since the same network is called multiple times during sampling, the weights are shared, making it a recurrent model. Since the data can be progressively corrupted from a clean to a fully noised state, we can use this trick to jump to any intermediate noise level and denoise a single step, backpropagating only once during training. From the perspective of recurrent models, (forward) diffusion allows us to recover the memory at an intermediate state (which we can see as the corrupted datapoint) without the need to backpropagate through the entire chain. This is a useful technique for efficiently generating intermediate states, and has the advantage that it can be highly parallelized during training. Compared to recurrent models, the memory state is predefined by the (noise) corruption process and not fully learned. Diffusion exploits very similar principles as autoregressive transformer models [19], namely a highly parallelizable training process and repeated network calls with weight sharing during sampling. Compared to other generative models like GANs, diffusion models are easier to train and don't suffer from instability problems arising from having to coordinate a generator and a discriminator.

Diffusion models are a category of powerful generative models first introduced in [17] (2015) and later popularized in [3] (2020), thanks to the impressive results obtained in image generation on CIFAR-10. In this section, we will examine different diffusion methods: first the seminal DDPM [3] method, which trains the diffusion process with a finite number of denoising steps; then DDIM [18], which introduces a few changes that generalize DDPM to an arbitrary number of steps; and finally V-diffusion from [16], a continuous diffusion method that aims to improve the mixing of the signal-to-noise ratio over DDIM.
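The jump-and-denoise trick described above can be sketched in a few lines. The variance schedule, the dummy `net`, and the simplified noise-prediction loss below are illustrative stand-ins (the exact formulation follows in the next sections); the point is that training touches only one randomly chosen noise level, with a single network call.

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # illustrative variance schedule
beta_bar = np.cumprod(1.0 - betas)      # running products used for the closed-form jump

def net(x_t, t):
    """Stand-in for the shared (weight-tied) U-Net f_theta(x_t, t); returns a noise estimate."""
    return np.zeros_like(x_t)

def training_step(x0):
    t = rng.integers(0, 1000)           # jump to a random intermediate noise level...
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(beta_bar[t]) * x0 + np.sqrt(1.0 - beta_bar[t]) * eps  # ...in closed form
    eps_hat = net(x_t, t)               # single network call -> backpropagate only once
    return float(np.mean((eps - eps_hat) ** 2))  # simplified noise-prediction objective

loss = training_step(rng.standard_normal(16))
```

Because the corrupted state is computed in closed form rather than by unrolling a chain, many such steps can run in parallel across a batch.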
For DDPM and V-diffusion, we will highlight the most important operations, namely: (1) noising the original datapoint (signal) to a desired noise level, (2) denoising a single step with the use of our (trained) network, (3) the training objective used, and (4) a sampling technique that repeatedly applies (2).

3.1 ddpm-diffusion

DDPM [3] is one of the seminal works in diffusion models. The method starts by assuming that x_0^{(0)}, . . . , x_0^{(D)} is a dataset of D i.i.d. points sampled from an unknown distribution q(x_0) (the subscript indicates the noise level, from a maximum of T down to 0). If we want to increase the noise level of our datapoint x_{t−1} by one step to level t, we sample a normal distribution with mean and covariance dependent on the previous point and some hyperparameters β_1, . . . , β_T, called the variance schedule, which control the increase in noise level from the previous point:

q(x_t | x_{t−1}) := N(x_t | µ_t = √(1 − β_t) x_{t−1}, Σ_t = β_t I)    (3)

Figure 4: Diffusion training.

Figure 5: Diffusion inference.

3.1.1 Noising (0 → t)

By using the previous assumptions, we can derive q(x_t | x_0), i.e. a way to jump directly from our clean datapoint x_0 to noise level t. This procedure is called the forward diffusion process. Using the reparametrization trick, it can be shown that this is also a normal distribution, formulated as:

q(x_t | x_0) := N(x_t | µ_t := √β̄_t x_0, Σ_t := (1 − β̄_t) I)    (4)

where β̄_t := ∏_{s=1}^{t} (1 − β_s) depends on all the β_s selected in q(x_t | x_{t−1}). With this mean and standard deviation we can easily sample x_t, the noisy version of x_0, as:

x_t = √β̄_t x_0 + √(1 − β̄_t) ε_t    (5)

where ε_t ∼ N(0, I).

3.1.2 Denoising (t − 1 ← t)

The reverse process distribution q(x_{t−1} | x_t) is also a normal distribution. However, it cannot be directly estimated as it depends on the entire dataset.
Instead, we train a neural network with parameters θ as an approximation:

p_θ(x_{t−1} | x_t) := N(x_{t−1} | µ_θ(x_t), Σ_θ(x_t))    (6)

If our model is trained properly, similarly to the forward process, we will be able to carry out a single denoising step by sampling the normal distribution using the learned mean and variance.

3.1.3 Training Objective

To train our model, we need a handle on the true mean and covariance of the reverse process q(x_{t−1} | x_t). As we have seen before, this is not directly tractable; however, if we include additional information about either x_0 (the true data point) or ε_t (the noise used to get x_t from x_0 in the forward process), we can compute a different but tractable auxiliary distribution. In the case where x_0 is given, the distribution is:

q(x_{t−1} | x_t, x_0) = N(x_{t−1} | µ̃(x_t, x_0), Σ̃(x_t, x_0))    (7)

with mean µ̃(x_t, x_0) := [√(1 − β_t) (1 − β̄_{t−1}) / (1 − β̄_t)] x_t + [√β̄_{t−1} β_t / (1 − β̄_t)] x_0 and covariance Σ̃(x_t, x_0) := [(1 − β̄_{t−1}) / (1 − β̄_t)] β_t I, as shown in [3]. To train our network, we will then minimize the divergence between this tractable distribution and the distribution estimated with our model:

L_t := D_KL [q(x_{t−1} | x_t, x_0) || p_θ(x_{t−1} | x_t)]    (8)
    = E_{x_0} [ (1 / (2 ∥Σ_θ(x_t)∥²₂)) ∥µ̃(x_t, x_0) − µ_θ(x_t)∥²₂ ]    (9)

which amounts to a simple L2 loss between the auxiliary mean and the mean estimated by the model, with an extra scaling factor that depends on the covariance; in [3] the covariance is fixed to Σ_θ(x_t) = β_t I. A more rigorous argument using variational inference can be applied to show that this is a lower bound of the negative log-likelihood of the data distribution. More concretely, our model f_θ will output an estimated mean given the noisy datapoint and the noise level as input: µ_θ(x_t) = f_θ(x_t, t), which we can then use to sample the next x_{t−1} from a normal distribution.
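This single denoising step can be sketched as follows; `mu_theta` is a placeholder for the trained network's mean estimate, and the fixed covariance β_t I follows the choice in [3]. The schedule value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
beta_t = 1e-4  # illustrative entry of the variance schedule; Sigma_theta = beta_t I

def mu_theta(x_t, t):
    """Placeholder for the trained network f_theta(x_t, t) estimating the reverse mean."""
    return 0.99 * x_t

def denoise_step(x_t, t):
    # Sample x_{t-1} ~ N(mu_theta(x_t), beta_t I): learned mean plus fixed-variance noise.
    return mu_theta(x_t, t) + np.sqrt(beta_t) * rng.standard_normal(x_t.shape)

x_t = rng.standard_normal(8)
x_prev = denoise_step(x_t, t=10)
```

Note the added Gaussian noise: even with a perfect mean estimate, each reverse step is stochastic.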
If instead we assume ε_t is given, we can follow a similar procedure to get the loss L_t:

L_t := D_KL [q(x_{t−1} | x_t, ε_t) || p_θ(x_{t−1} | x_t)]    (10)
    = E [ (β_t² / (2 (1 − β_t) (1 − β̄_t) ∥Σ_θ(x_t)∥²₂)) ∥ε_t − ε_θ(x_t)∥²₂ ]    (11)

In this case our model will estimate the noise instead of the mean of the datapoint x_t, i.e. ε_θ(x_t) = f_θ(x_t, t); however, we can still recover the mean as: µ̃ = (1 / √(1 − β_t)) (x_t − (β_t / √(1 − β̄_t)) ε_t). Empirically, it has been shown in [3] that the objective can be simplified further by ignoring the scaling factor:

L_t = E_{ε_t} [ ∥ε_t − ε_θ(x_t)∥²₂ ]    (12)

The final objective function to train the model is then computed with random noise levels t sampled from a uniform distribution:

L := E_{t∼[1,T]} [L_t]    (13)

3.1.4 Sampling

Sampling in DDPM is very straightforward: we start with x_T ∼ N(0, I) and recursively call the model T times, using at each step the estimated means µ_θ(x_t) (or noises ε_θ(x_t)) of the T normal distributions to get each subsequent sample: x_{T−1} ∼ p_θ(· | x_T), . . . , x_1 ∼ p_θ(· | x_2), x_0 ∼ p_θ(· | x_1), where x_0 will be our generated output data point. Note that this is a stochastic sampling process, since at each step additional noise is added from sampling the normal distribution.

3.1.5 Limitations

This method requires on the order of hundreds of sampling steps to get good quality samples. Compared to the more modern methods that follow, the number of steps T is a fixed hyperparameter both during training and sampling, limiting its flexibility.

3.2 ddim

DDIM [18] is another seminal work for diffusion models. By introducing a few changes to DDPM, the number of sampling steps used during inference can be dynamically changed while maintaining the same training procedure. This allows sampling between 10× and 100× faster, and trading speed for quality at will.
A direct implication of having a variable number of steps during sampling is that we can train with very large T, or even infinitely large T, leading to a continuous diffusion process. The idea of DDIM is that if we know both x_0 and x_t, we can use q(x_{t−1} | x_t, x_0) to sample x_{t−1}. There are two possibilities: either train our network to predict x_0 directly (i.e. no sampling), or train our network to predict the noise ε_t (as done in DDPM), which combined with x_t can be used to infer x_0. A key observation is that using this alternative method doesn't change the training objective, as the objective only depends on the forward diffusion process.

Importantly, we can use a different forward process to recover the next step, for example use q(x_{t−2} | x_t, x_0) to jump directly to x_{t−2} instead of x_{t−1}, essentially skipping a sampling step and speeding up the process. If we make the time-step continuous, we can jump to any intermediate step in (0, t]. Even more interestingly, this continuous sampling procedure can be viewed through the lens of ordinary differential equations, allowing us to use a variety of existing samplers, like the basic Euler method or more advanced ODE samplers.

3.3 v-diffusion

V-diffusion, or v-objective diffusion [16], is a diffusion method inspired by DDIM, trained with a continuous value σ_t ∈ [0, 1]. This is the method we found to work best on a variety of audio tasks. In v-diffusion, if σ_t = 0 then x_{σt} represents a data point x from the data distribution, and if σ_t = 1, it will be Gaussian noise ε. In DDIM we can choose to either use the model to predict x_0, or use it to predict ε_t; in this case, however, a velocity value v_{σt} is estimated, from which both x_0 and ε can be inferred.
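The role of σ_t can be illustrated with a minimal sketch of the circular signal/noise weighting used by v-diffusion; the values below are illustrative.

```python
import numpy as np

def v_noise(x0, eps, sigma):
    """Mix signal and noise on a quarter circle: sigma=0 gives x0, sigma=1 gives eps."""
    phi = 0.5 * np.pi * sigma
    return np.cos(phi) * x0 + np.sin(phi) * eps

x0 = np.full(4, 2.0)     # toy "clean" datapoint
eps = np.zeros(4)        # toy "noise" (zeros, so the mixing is easy to read off)
x_clean = v_noise(x0, eps, 0.0)  # alpha=1, beta=0: exactly the datapoint
x_noisy = v_noise(x0, eps, 1.0)  # alpha=0, beta=1: exactly the noise
x_mid = v_noise(x0, eps, 0.5)    # equal weights cos(pi/4) = sin(pi/4) = sqrt(2)/2
```

Because cos²(φ) + sin²(φ) = 1, the total "energy" of the mix stays balanced as σ_t moves from 0 to 1.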
3.3.1 Noising (0 → σ_t)

Figure 6: V-Diffusion semicircle.

The noising process uses a weighting on a circle:

x_{σt} = α_{σt} x_0 + β_{σt} ε    (14)

where α_{σt} := cos(φ_t) and β_{σt} := sin(φ_t), with φ_t := (π/2) σ_t. When σ_t = 0, then x_{σt} = x_0, i.e. no noise is added; if instead σ_t = 1, then x_{σt} = x_1 = ε, i.e. only noise ε ∼ N(0, I). Intuitively, using the weighting on a circle makes sure that as we move σ_t linearly from 0 to 1, the noising process slowly removes information from x_0. By sampling a random σ_t ∈ [0, 1], we are more likely to pick a value that resembles x_0 instead of pure noise ε, meaning that the model will more often see data with smaller amounts of noise. Empirically, this has been shown to be beneficial over standard DDIM diffusion.

3.3.2 Denoising (σ_{t−1} ← σ_t)

To denoise from noise level σ_t to noise level σ_{t−1}, we can use our velocity-estimating model v̂_{σt} = f_θ(x_{σt}, σ_t). Note that the velocity here is defined as the derivative v_{σt} := ∂x_{σt}/∂σ_t, i.e. how much the datapoint changes with a small change in the noise level σ_t (see circle figure). As mentioned before, using an estimate of v_{σt}, we can obtain both x_0 and ε, which in turn can be used to estimate x_{σ_{t−1}} in DDIM style:

v̂_{σt} = f_θ(x_{σt}, σ_t)    (15)
x̂_0 = α_{σt} x_{σt} − β_{σt} v̂_{σt}    (16)
ε̂ = β_{σt} x_{σt} + α_{σt} v̂_{σt}    (17)
x̂_{σ_{t−1}} = α_{σ_{t−1}} x̂_0 + β_{σ_{t−1}} ε̂    (18)

In the previous equations, the first three lines show how to recover the clean datapoint x_0 and the noise ε from v_{σt}, and the last line remixes the noise with the recovered datapoint to get x_{σ_{t−1}}. These equations can be formally obtained by using trigonometric properties on the definition of velocity (as shown in the appendix of [16]), and intuitively understood by rearranging vectors on the semicircle.
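Equations (15)-(18) can be checked numerically: substituting the true velocity v = α ε − β x_0 for the network estimate, a single step all the way back to σ = 0 recovers x_0 exactly. A sketch with illustrative values:

```python
import numpy as np

def alpha_beta(sigma):
    phi = 0.5 * np.pi * sigma
    return np.cos(phi), np.sin(phi)

def denoise_step(x_s, v_hat, s, s_prev):
    """Eqs (16)-(18): recover x0 and eps from the velocity, remix at the lower noise level."""
    a, b = alpha_beta(s)
    x0_hat = a * x_s - b * v_hat
    eps_hat = b * x_s + a * v_hat
    a_p, b_p = alpha_beta(s_prev)
    return a_p * x0_hat + b_p * eps_hat

x0, eps, s = np.full(3, 2.0), np.ones(3), 0.5
a, b = alpha_beta(s)
x_s = a * x0 + b * eps          # noised datapoint at sigma = 0.5, eq. (14)
v_true = a * eps - b * x0       # the true velocity at this noise level
x_rec = denoise_step(x_s, v_true, s, 0.0)   # jump back to sigma = 0 in one exact step
```

With a trained network the velocity is only an estimate, so in practice many smaller steps over a sigma schedule are taken instead of one exact jump.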
3.3.3 Training Objective

By taking the derivative of the noising formulation, we can compute the true velocity v_{σt} = α_{σt} ε − β_{σt} x_0. The training objective is then:

L = E_{t∼[0,1], ε} [ ∥v̂_{σt} − v_{σt}∥²₂ ]    (19)
  = E_{t∼[0,1], ε} [ ∥f_θ(x_{σt}, σ_t) − (α_{σt} ε − β_{σt} x_0)∥²₂ ]    (20)

where ε ∼ N(0, I) and x_{σt} is computed according to the noising formulation.

3.3.4 Sampling (σ_0 = 0 ← · · · ← σ_{t−1} ← σ_t = 1)

To obtain a new data point x̂_0, some starting random noise ε ∼ N(0, I) is sampled, and the denoising procedure previously demonstrated is iteratively applied over a linear sigma-schedule.

4 ARCHITECTURES

4.1 our a-unet library

4.1.1 Background of U-Nets

U-Nets [13] are the model type most commonly used in diffusion: a type of convolutional architecture originally developed for image segmentation. U-Nets consist of an encoder network and a decoder network connected by skip connections, designed to learn and preserve fine details at multiple resolutions. The original architecture used 2D convolutions to exploit the spatial structure of images, but in our case we adapt it to 1D convolutions in order to process audio. The U-Net has evolved over time, with modern versions that incorporate numerous enhancements and improvements, including new skip connections, convolutional blocks, and attention blocks. Our a-unet library includes a toolbox of detachable building blocks that makes it possible to build a wide variety of U-Nets; the goal of a-unet is to provide the right level of abstraction, making it easy to alter the basic template so that experimentation and iteration remain fast.

4.1.2 A-UNet Block

In order to build a generic U-Net block, it is necessary to identify the core components of the architecture (listed in Figure 7).

Figure 7: U-Net block.

These include a downsampling block that simultaneously reduces the resolution and number of channels of the input (typically implemented with a single convolution), a stack of customizable processing items (see subsection 4.1.3 for details), an inner block that may contain another instance of the block recursively, a second
stack of processing items that typically mirrors the first stack, an upsampling block that reverses the effects of the downsampling (typically implemented with a single transposed convolution), and a skip block that merges the skip connection using some operation.

Furthermore, we select 3 possible types of conditioning contexts that can be injected in the processing items, namely: a feature-vector based conditioning, typically used with diffusion to provide the noise level; an embedding based conditioning that injects multiple embedding vectors as context, typically used for text/CLIP-embedding based conditioning; and lastly a channel-based conditioning, used to inject entire stacks of channels in the block. Depending on the task, we might use a different combination of conditioning methods.

All described characteristics can be defined and customized using the following block:

from a_unet.apex import Block

block = Block(
    dim=1,
    in_channels=2,
    channels=4,
    factor=2,
    items=[...],
    # Optional
    items_up=[...],
    downsample_t=...,
    upsample_t=...,
    skip_t=...,
    inner_block=...
)

This is a building block for a U-Net, where we can customize the number of input/output channels (in_channels), the number of channels post-downsampling, and the downsampling factor. The items list will contain the different items that will be duplicated after the inner block. Optionally, we can change the type of skip connection, downsampling and upsampling operations. The inner_block can be another instance of Block to recursively nest multiple blocks.

Since a U-Net is usually composed of multiple nested blocks where the number of in_channels of the inner block must match the number of channels of the outer block, we provide XUNet as a glue class, and XBlock as a template class for Block, to make this process more convenient and automated:

from a_unet.apex import XUNet, XBlock

unet = XUNet(
    dim=1,
    in_channels=2,
    blocks=[
        XBlock(channels=16, factor=2, items=[...]),
        XBlock(channels=..., factor=..., items=[...]),
    ],
)
4.1.3 Items

As items we provide a ResnetItem (R) for convolutional processing, a ModulationItem (M) to apply modulation of the features using the provided feature vector, an AttentionItem (A) for self-attention processing between region vectors, a CrossAttentionItem (C) for cross-attention between region vectors and a provided set of embedding vectors, a FeedForwardItem (F) for MLP-like processing of region vectors, and an injection item (I) for injecting a set of provided channels.

Figure 8: U-Net items.

The example combination from Figure 8, or any other combination, can be easily built with the following:

from a_unet.apex import (
    XUNet,
    XBlock,
    ResnetItem as R,
    ModulationItem as M,
    AttentionItem as A,
    CrossAttentionItem as C,
    FeedForwardItem as F,
)

unet = XUNet(
    dim=1,
    in_channels=2,
    blocks=[
        XBlock(channels=16, factor=2, items=[R, M, A, F]),
        XBlock(channels=..., factor=..., items=[R, M, C, F]),
    ],
)

Parameters given to the XUNet, such as the skip connection type skip_t, will in turn be automatically forwarded to all blocks; parameters can also be provided to a specific block only.

Additional customized items can be easily included without altering the template code, making experimentation very simple.

4.1.4 Plugins

Plugins are used to augment the U-Net model with additional functionality. It's often the case that we have to wrap the U-Net model with some pre- and post-transformation, or that we have to alter or augment the inputs provided to the U-Net. In order to maintain a modular structure, plugins can be used to directly modify the U-Net type without having to change the model code.

4.1.4.1 Time Conditioning Plugin

The time conditioning plugin is used to convert a floating point value to a conditioning feature vector; this is useful during diffusion to provide the current noise level, or timestep. To obtain the time feature vector from a floating point value, a learned weight is multiplied by the time information to get a frequency vector that is then processed using a pair of sin and cos to get Fourier features. The Fourier features are then transformed to a learned feature vector of the desired size by a stack of MLPs.
This function can be easily added to the base U-Net as:

UNetWithTime = TimeConditioningPlugin(UNet)

This extends the U-Net with an additional time parameter, which can be one or more floating point values for each batch element.

4.1.4.2 Embedding Classifier Free Guidance Plugin

Classifier free guidance is a method proposed in [4]. We provide a ClassifierFreeGuidancePlugin used to increase the conditioning strength of the provided embedding. During training, the embedding is masked with a fixed (learned) embedding with a small probability, in order to ensure that the network is able to generate realistic output without access to any conditioning information. During inference, the network is called twice, once with the conditioning embedding to get ŷ_e, and once with the fixed embedding used as mask to get ŷ_m. A scaling factor embedding_scale (λ) is then used to guide the network to produce an output that gives more or less importance to the conditioning embedding compared to the masked embedding:

ŷ = ŷ_m + (ŷ_e − ŷ_m) · λ    (22)

This plugin can be easily used by augmenting the U-Net as:

UNetCFG = ClassifierFreeGuidancePlugin(
    net_t=UNet,
    embedding_max_length=64
)

Later the new UNetCFG model can be called with the additional parameter embedding_mask_proba to probabilistically mask a batch of embeddings during training (e.g. a value of 0.1 will mask 10% of the embeddings with a fixed embedding of length embedding_max_length), or with an embedding_scale parameter during inference, to call the U-Net twice, with and without masking, and apply the scaling factor. In both cases, the embedding parameter must be provided as well.

4.1.4.3 Text Conditioning

The text conditioning plugin augments the U-Net embedding conditioning information with a learned text embedding from a frozen pretrained language model.
By default, the T5-base transformer model +from [10] is used if no embedder is provided. +UNetWithText = TextConditioningPlugin( +net_t=UNet, +embedder=T5Embedder() +) +This adds an additional text field to the U-Net forward method that +automatically extends the embedding with text embeddings from a +pretrained language model. +4.2 +our audio-encoders-pytorch library +The autoencoder component has a similar structure to the U-Net, +with a few changes: (1) to make it an autoencoder no skip connections +will be used, (2) no attention blocks will be used to make it generic to +any input sequence length (3) no conditioning blocks will be applied. +We open-source the autoencoder library audio-encoders-pytorch (AEP) +as a separate library from a-unet. AEP includes both encoders and +decoders, and a set of bottlenecks that can be used to normalize +the latent space, namely (1) a variational bottleneck in the style of +VAEs [5], (2) a simple tanh bottleneck, (3) a quantizer bottleneck, sim- +ilar to the one proposed by VQ-VAEs [9]. Furthermore, we propose +two encoders that encode spectrograms channelwise into a 1D latent, +namely a ME1d (magnitude spectrogram only encoder), or MelE1d +(mel spectrogram encoder), both compatible with the different bottle- +necks. + + +5 +M O D E L S +5.1 +overview +In this section we describe various diffusion models and their under- +lying structures. We investigate various diffusion models that serve +different purposes and functions, including upsampling and autoen- +coding. Although these models may have distinct applications, they +are ultimately utilized with the goal of audio generation. All of the +different models are implemented using variations and combinations +of the previously described achitectures (i.e. U-Nets and auto-encoders). +The models proposed are implemented in the audio-diffusion-pytorch +(ADP) library. 
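A minimal stand-in for the simplest of these bottlenecks, the tanh bottleneck, illustrates the idea: the real encoders from audio-encoders-pytorch are assumed and not implemented here, only the normalization step is shown.

```python
import numpy as np

def tanh_bottleneck(z):
    """Squash a raw (unbounded) encoder latent into [-1, 1], so that a diffusion
    model can later operate on it like on normalized data."""
    return np.tanh(z)

z_raw = np.array([-7.0, -0.3, 0.0, 0.3, 7.0])  # hypothetical unbounded encoder outputs
latent = tanh_bottleneck(z_raw)                 # bounded latent, ready for diffusion
```

The variational and quantizer bottlenecks serve the same normalizing role while adding, respectively, a stochastic prior and a discrete codebook.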
5 MODELS

5.1 overview

In this section we describe various diffusion models and their underlying structures. We investigate various diffusion models that serve different purposes and functions, including upsampling and autoencoding. Although these models may have distinct applications, they are ultimately utilized with the goal of audio generation. All of the different models are implemented using variations and combinations of the previously described architectures (i.e. U-Nets and auto-encoders). The models proposed are implemented in the audio-diffusion-pytorch (ADP) library.

5.2 diffusion unconditional generator

The diffusion generator is the simplest model we propose to synthesize unconditional audio, and is implemented with a single 1D U-Net.

5.2.1 Motivation

The unconditional diffusion model is a good starting point to test the overall quality of the particular architecture and diffusion method used. It doesn't include any type of conditioning, making the dataset and training procedure very simple, and at the same time it can give a good idea of the generation quality.

5.2.2 Method

The diffusion generator takes a raw high-quality stereo audio source from the datasets as input, which is then corrupted to a random noise level based on the chosen diffusion method. Using a U-Net, the generator then predicts the output, which may be the denoised input or a value that is used to compute the denoised input, depending on the type of diffusion method employed. The noise level (usually called time or σ) is provided as conditioning to the network through an encoded feature vector, to tell the network how much noise must be removed from the provided input. For the diffusion generator, neither the embedding conditioning nor the cross attention blocks are used.
Figure 9: Diffusion model training.

Figure 10: Diffusion model inference.

During inference, a random vector with the same shape as a training audio sample is sampled, and the U-Net is iteratively invoked with varying noise levels to generate a new plausible sample from the data distribution.

5.2.3 Evaluation

We evaluated the performance of the proposed model with different diffusion methods. Out of the box, the model demonstrated good results with a basic sampler during inference, generating reasonable samples in a few tens of steps. We found v-diffusion to work best across loudness levels and dynamic range, with around 50 sampling steps producing a good balance of sampling speed and sample quality.

5.2.4 Transforms

Independently of the diffusion method used, this model without any addition struggles to generate more than a few seconds of sound. If the raw waveform is provided to the network, the initial convolutional blocks of the U-Net will have to process huge samples, e.g. even a single second of high-quality 48 kHz audio requires 48000 values to be processed by the first convolutional block. This can be a speed issue if the audio is not downsampled quickly enough in the U-Net, as the inefficiency will compound over the number of sampling steps of the diffusion process. In addition to that, if attention blocks are used, we will have to downsample enough to bring the number of timesteps into the range of 1024 or 2048 values. Exceeding that will slow down self-attention drastically due to the n² computational complexity for sequence length n. Hence, a lot of downsampling is required with long audio samples if we want to satisfy these criteria.
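The downsampling requirement can be made concrete with a little arithmetic; the factors below are illustrative and mirror a configuration discussed later in this chapter.

```python
length = 2 ** 19            # ~11 s of 48 kHz audio per channel
transform_factor = 2 ** 6   # e.g. a pre-transform (patching / learned) with stride 64
unet_factor = 2 ** 3        # three additional factor-2 downsampling blocks

# Sequence length seen by the first attention block:
context = length // (transform_factor * unet_factor)
attention_cost = context ** 2   # self-attention scales quadratically in sequence length
```

With these factors the attention context lands exactly at 1024, inside the desired 1024-2048 range; without the pre-transform it would be 64x larger, and the attention cost 4096x larger.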
To mitigate the challenges mentioned earlier, we investigate the use of various methods and audio transforms to convert the raw audio source into a representation that reduces the temporal dimension in exchange for additional channels.

5.2.4.1 Patching

The first transform is patching, proposed originally for the image domain in [6]. We adapt patching to the 1D domain, where the idea is to group sequential time steps into chunks that will then be transposed to channels. Given a patch size p, the length t is reduced to t/p while the number of channels increases to c · p; at the end of the U-Net processing, the channels are unchunked back to the full length. We found patching to give drastic speedups, almost at a factor of p for p = 2, 4, 8, 16, 32, ..., allowing us to train models with much longer audio sources. However, even if the audio generation quality almost matches the non-patched version, audible aliasing is present with all factors. This drawback is likely due to the repeated unchunking process, which will have a repeating structure, creating a high-frequency sine wave in the signal. Furthermore, we found that patching with p ⩾ 64 started to degrade quality, probably due to some capacity constraint in the channel dimension. We can think of patching as a deterministic auto-encoding process with a downsampling factor of p.

5.2.4.2 STFT

The second transform is the previously introduced STFT. We use the common setting of an FFT size and window length of 1024 with a hop size of 256. By wrapping the U-Net with STFT and iSTFT, the transform downsamples the length of the audio by 1024 while equally increasing the channel count. The STFT is implemented with the Fast Fourier Transform, hence it's efficient to apply. No normalization is required on the spectrogram, since the diffusion loss will still be applied on the reconstructed wave.
This method gives great speedups thanks to the large downsampling but, similarly to patching, suffers from degradation in quality compared to the raw wave representation. Perceptible noise is present in the generations both when transforming to magnitude+phase and when using real+imaginary parts.

5.2.4.3 Learned Transform

Lastly, we propose a learned transformation with a single convolutional block at the start of the U-Net and a transposed-convolutional block at its end. The transform uses a large kernel size and a stride of 64. This downsamples the original signal in a single step, trading off small amounts of speed compared to deterministic patching or the FFT-implemented STFT. However, since it's a convolutional method, we can choose the number of channels and increase it to a larger value than used during patching (e.g. 128, double the kernel size and stride), giving more capacity to be resilient to artifacts. At the same time, we can use ideas from the STFT and have large overlapping windows with learned kernels instead of fixed sine/cosine waves (e.g. kernel size 128, stride 64, 64 channels, with padding to preserve dimension), which can help to overcome aliasing. We found this to be the best quality/speed tradeoff method of pre-transforming audio.

5.2.5 Usage

The diffusion generation model proposed is constructed by first adding the LTPlugin to the default U-Net UNetV0. This plugin wraps the U-Net with the previously described learned transform. After that, we provide the U-Net type to the DiffusionModel class, which is responsible for constructing the U-Net, the diffusion training method (by default V-diffusion), and the diffusion sampler (by default DDIM).
from audio_diffusion_pytorch import DiffusionModel, UNetV0, LTPlugin, VDiffusion, VSampler

UNet = LTPlugin(
    UNetV0, num_filters=128, window_length=64, stride=64
)

model = DiffusionModel(
    net_t=UNet,
    in_channels=channels,
    channels=[256, 256, 512, 512, 1024, 1024],
    factors=[1, 2, 2, 2, 2, 2],
    items=[2, 2, 2, 2, 4, 4],
    attentions=[0, 0, 0, 0, 1, 1],
    attention_features=64,
    attention_heads=12,
    diffusion_t=VDiffusion,
    sampler_t=VSampler
)

This model can be easily used to get the diffusion loss for training (which automatically applies the entire diffusion process) or to sample a new element provided the starting noise.

# Training
x = torch.randn(1, 2, 2**21) # [batch, channels, length]
loss = model(x)

# Sampling
noise = torch.randn(1, 2, 2**21)
sample = model.sample(noise=noise, num_steps=50)

5.2.6 Evaluation

We found that it's important for quality to have a single non-downsampled block at the start to process the transformed audio at full resolution. Furthermore, attention blocks are crucial for temporal consistency of the generated audio, but can only be applied after the original waveform is downsampled to around 1024-2048 length. For example, if the original audio has length 2^19 (i.e. ∼11 s at 48 kHz), we downsample by 64 = 2^6 in the learned transform, and by 2^3 in the 4 blocks before the first attention block, hence the context length of the first attention block will be in the desired range of 2^10 = 1024.

This model can generate high quality audio over tens of seconds, possibly more depending on the speed requirements. In general, a larger set of initial convolutional/resnet blocks (closer to the waveform) will result in better audio quality, at the cost of generation speed.

We found that the architecture is able to generalize to longer samples than it was trained on, if attention blocks are used.
The samples maintain good long-context awareness even when doubling or more the training length. Note that this increases the attention context size, and hence needs to be accounted for before training.

5.3 text-conditional diffusion

5.3.1 Motivation

We used text as a means of conditioning for several reasons. In Imagen [15] it has been shown that pretrained and frozen language models can be successfully applied to condition the diffusion process to generate images matching a textual description, and that by increasing
+J9KIU& fJ6 IU6L19C6 JOL6 &6U61IC Ug 692A fO I26' +(---) i o + o +UgfCJIu&: LJI2 JIUf2 fO f6 1gCf fgf g 2JIUIJg1 J6fJog J1&f 9J2O MOLK +Fu6 2s6 Ot fG J9u&ng&6 Joql MI 162nJf JU gU JubiO6g 6xf-Ig&6 +58 +WODEF2 +F- +s io qlgd gdt diw bvloa llsuau i tsdt 2TT i mldorq ommo +i idT .igbo o s t ld o i d ilep ois b +tiw 2brow wgf s 9ldmumr ot 9lds 2i Igbom 9dt tsdt bruo ud .(2TT) +86U61JC MO1q2 fgf 916 1onUq JU fIfJ62: M6 9J20 fLJ6g f6xf-fO-2b66CJ +f6Xfn9J q62CLbIOU 2b6CI9JI 2Ju& FJ6 &6I6 Ot f6 2Ou& OL JUO16 +gd iw oibus otsm ot Iow iow ot rinoitibron xg briuof +2.3.3Eofom +.0.=J62-pnibb9dm9 +Unw-2f6b2=20" +[x9 m" ]=x +Uo126' +2gwbr6 = woq6'29wbf6( +# 29wbua +ro22 = woq6(x* f6xf=[a f6xf、]* 6wpeqqiua-wg2-blopg=0'1) +# ga +wpeqq-wg-a= +.9=p_pnbb9dm_92 +=_ +i o Ix : +nust +WOT26 +23 IEX-COMDIIIOMV DIEE2IOM +sd30 +models +5.4 +diffusion auto-encoders with latent diffusion +5.4.1 +Motivation +Patching, STFT, and learned transforms can be used to reduce the in- +put length during the diffusion process. Those approaches are advan- +tageous if we want to train a single model end-to-end, however, this is +suboptimal since the waveform is expanded to its original full-length +shape multiple times during sampling, slowing down the process. +A more appropriate way would be to first encode the waveform, +then do the diffusion loop in the compressed representation, never +expanding it to the full waveform until the end of the loop. This +is the idea proposed in [12] (latent diffusion), where a variational +autoencoder is first used to compress images by a few factors to a +smaller latent space, and later diffusion is applied to that latent. By +compressing the audio before applying diffusion, we can drastically +speed up the diffusion sampling procedure, making an important +case for an efficient and good quality autoencoder. 
5.4.2 Method

There are different ways to implement the autoencoder; however, an important property is that we must be able to apply the diffusion process to its latent space, hence some sort of normalization is required to make sure the values are in the range [−1, 1]. Furthermore, the autoencoder should compress as much as possible without a significant loss in quality. The smaller the latent, the faster the inner diffusion model can process and generate.

We experimented with different autoencoders, and found that directly compressing the waveform can only provide around 2x-4x compression without a significant loss in quality. On the other hand, as we have discussed in the representation section, compressing magnitude or mel spectrograms can provide much higher compression rates. The downside is that the spectrogram requires a model (vocoder) to reconstruct the original waveform, even from a non-compressed state.

In this work, we propose to use a magnitude diffusion autoencoder: an encoder (ME1d) first encodes the waveform into a magnitude spectrogram, which is then encoded into a latent compressed 64x compared to the original waveform; a diffusion model later reconstructs the waveform conditioned on the latent, acting both as a deterministic compressing encoder and a diffusion vocoder at the same time. In order to make sure the latent space is normalized, we use a tanh function on the bottleneck. Since the decoding/vocoding process is a diffusion model, the waveform can be quickly reconstructed from the latent by using a small step count; if a more accurate reconstruction is desired, a higher step count is required.
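A tanh-normalized bottleneck can be sketched as below; channel sizes and depth are illustrative, not the thesis's exact ME1d configuration:

```python
import torch
import torch.nn as nn

class TanhBottleneckEncoder(nn.Module):
    """Sketch of a 64x length-compressing encoder whose latent is squashed
    with tanh, so it is guaranteed to lie in [-1, 1] and can be diffused."""
    def __init__(self, in_channels: int = 1, latent_channels: int = 32):
        super().__init__()
        layers, channels = [], in_channels
        for out_channels in (16, 32, 64):  # three 4x downsampling stages: 4**3 = 64x
            layers += [
                nn.Conv1d(channels, out_channels, kernel_size=8, stride=4, padding=2),
                nn.SiLU(),
            ]
            channels = out_channels
        layers.append(nn.Conv1d(channels, latent_channels, kernel_size=1))
        self.net = nn.Sequential(*layers)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.net(waveform))  # normalize the bottleneck

encoder = TanhBottleneckEncoder()
latent = encoder(torch.randn(1, 1, 4096))  # 4096 samples -> 64 latent frames
```

The tanh keeps the latent in the same value range the diffusion process expects, so no separate normalization statistics are needed.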
Figure 13: Diffusion autoencoder training
Figure 14: Diffusion autoencoder inference

5.4.3 Evaluation

Since the diffusion autoencoder is generic to any waveform length, we can remove attention blocks and use a convolutional-only architecture, maintaining real-time generation speed on a single GPU even with large context lengths. To get the final model, we apply a cascading diffusion generator to generate the latent with text conditioning, in the style of latent diffusion; since the representation is already highly compressed, we only need the decoder to expand the representation back to a waveform.

Figure 15: Two-stage diffusion generator with diffusion decoder

5.5 diffusion upsampler

5.5.1 Motivation

Diffusion upsamplers can be seen as a special case of diffusion autoencoders, where the encoding function is fixed to be the downsampling operation. Upsamplers can be used for different purposes: (1) to generate audio at a low sample rate with a primary model and later upsample it with a secondary upsample model, or (2) to increase the sample rate of a provided waveform (e.g. from 3kHz to 48kHz). From the perspective of spectrograms, downsampling a waveform corresponds to setting to zero the top half of the grid (or image), starting at some frequency. The model we propose, however, works directly on waveforms.

5.5.2 Method

During training, the low sample rate version of the waveform is brought back to the original length using interpolation and appended as additional context to the input of the diffusion model. Attention blocks and larger context lengths can also help.
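The conditioning step (interpolate the low-rate audio, then stack it on the channel dimension) can be sketched as follows; the helper name and the linear interpolation mode are assumptions:

```python
import torch
import torch.nn.functional as F

def prepare_upsampler_input(noisy_audio, low_rate_audio):
    """Sketch: stretch low-sample-rate audio to the target length with linear
    interpolation and stack it on the channel dimension as U-Net conditioning."""
    target_length = noisy_audio.shape[-1]
    stretched = F.interpolate(low_rate_audio, size=target_length,
                              mode="linear", align_corners=False)
    # channels become [noisy audio channels, conditioning channels]
    return torch.cat([noisy_audio, stretched], dim=1)

# e.g. conditioning a 48kHz stereo target on 3kHz stereo audio (16x upsampling)
x = prepare_upsampler_input(torch.randn(1, 2, 48000), torch.randn(1, 2, 3000))
```

Because the conditioning is length-matched before stacking, the U-Net itself needs no architectural change to accept it.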
Figure 16: Diffusion upsampler training
Figure 17: Diffusion upsampler inference

During inference, we interpolate the low sample rate channels to match the output high sample rate channels, so that they can be properly stacked; the sampling process then reconstructs the missing high frequencies. A text or an embedding can be provided as additional guidance to help the reconstruction, especially if upsampling from very low sample rates.

5.5.3 Evaluation

Depending on the complexity of the dataset, diffusion upsamplers can get very good results by upsampling 16x or more. We found upsamplers to excel on speech data, as it's likely an easier task than music. Similarly to other models, increasing the size (channel count or layers) of the initial convolutional blocks in the U-Net helps the reconstruction of high frequencies.

5.6 diffusion vocoder

5.6.1 Motivation

Mel spectrograms are a compressed representation close to what humans perceive, making them an ideal representation for audio generation. However, properly turning a spectrogram back to a playable audio waveform is not trivial, since the phase information is discarded; classical methods tend to produce artifacts, making the case for commonly used deep-learning based vocoders. Learned vocoders can produce very good quality speech, but high-quality 48kHz music vocoders are still lacking. In the following section, we propose a simple adaptation that allows us to turn our U-Net architecture with almost no change into a high-quality music vocoder.

5.6.2 Method

The diffusion vocoder is trained by first converting the waveform to a mel spectrogram, and then flattening the spectrogram with a 1d transposed convolution back to its waveform shape. Similarly to the upsampler, we stack the additional channels on the input channels of the U-Net.

Figure 18: Diffusion vocoder training
Figure 19: Diffusion vocoder inference

In order to flatten the spectrogram, we have to match the configuration of the STFT used to obtain the spectrogram with the configuration of the 1d transposed convolution.
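This matching can be sketched with PyTorch as below; the centering padding is one reasonable assumption, and the weights here are untrained:

```python
import torch
import torch.nn as nn

# STFT/mel configuration from the text: window 1024, hop 256, 80 mel channels
window_length, hop_length, n_mels = 1024, 256, 80

# A transposed 1d convolution whose kernel/stride mirror the STFT window/hop,
# so each spectrogram frame is expanded over the waveform region it came from.
to_waveform_shape = nn.ConvTranspose1d(
    in_channels=n_mels,
    out_channels=1,
    kernel_size=window_length,
    stride=hop_length,
    padding=(window_length - hop_length) // 2,
)

mel = torch.randn(1, n_mels, 100)      # 100 spectrogram frames
flattened = to_waveform_shape(mel)     # -> (1, 1, 100 * 256) samples

# the spectrogram holds 80 values per 256-sample hop: 256 / 80 = 3.2x compression
compression = hop_length / n_mels
```

With this configuration each output frame spans exactly one hop, so the flattened signal has the same length as the waveform the STFT consumed.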
The key insight is that the STFT operation can be viewed as a 1D convolution with large kernel sizes (or window size) of sine and cosine waves, which is then merged in-place using the absolute value, and later mel-scaled. The mel-scaling doesn't alter the temporal positioning, only the frequency (or channels) of the spectrogram. Hence, if we set kernel sizes equal to the STFT window length, strides equal to the STFT hop length, and proper padding, the transposed convolution will focus on the same context region of the waveform used to obtain the spectrogram. Similarly, we set the input channels of the transposed convolution to match the number of channels used for the mel spectrogram, and the output channels to 1. Stereo audio is decoded by batching. We used a window-length/kernel-size of 1024 and a hop-length/stride of 256; similarly to popular vocoders, we used 80 mel-spectrogram channels. With this configuration, the spectrogram has a default 3.2x compression factor over the initial waveform.

5.6.3 Evaluation

This model can produce high-quality waveforms. As with other models, a good reconstruction of high frequencies requires more convolutional blocks towards the start of the U-Net. Moreover, we hypothesize that increasing the number of mel channels might increase quality for two reasons: first, the mel spectrogram would compress less information out of the initial waveform, and second, the transposed convolution would have more channels to flatten the spectrogram and hence more capacity.

5.7 training info

5.7.1 Data

We trained all of our models on a 2500h mix of audio at 48kHz. In the text-based model, we used metadata such as title, genre, album, and artist as conditioning information. For the autoencoder, upsampler, and vocoder, we trained on random crops of length 2^18 (∼5.5s at 48kHz).
For the long-context text-conditional audio generation model, we trained on fixed crops of length 2^21 (∼44s at 48kHz), using the crop index as additional conditioning information.

5.7.2 Training

We trained all of our models with AdamW, using a learning rate of 10^−4, β1 = 0.95, β2 = 0.999, ε = 10^−6, and weight decay of 10^−3. For all models, we used an exponential moving average with β = 0.995 and power of 0.7. We trained all models for around 1M steps with a batch size of 32; this takes approximately 1 week on a single A100 GPU for the largest, text-conditional model.

6 F U T U R E W O R K

While our models can reach a good generation quality on short few-second segments, or a good structure with longer segments, training an efficient model with both high quality and long context remains an open problem. A few promising future modelling approaches that need more experimentation include: (1) train diffusion models using perceptual losses on the waveforms instead of L2; this might help to decrease the initial size of the U-Net, as we wouldn't have to process non-perceivable sounds; (2) stack multiple upsamplers to generate a song top-down from low sample rates to high sample rates; (3) improve the quality of the diffusion autoencoder by using mel-spectrograms instead of magnitude spectrograms as input; (4) explore other types of conditioning which are not text-based, which might be useful to navigate the audio latent space, often hard to describe in words (DreamBooth-like models [14] could be used to assign symbols to sounds); (5) compress mel-spectrograms to a quantized representation with diffusion autoencoders to allow for high compression ratios, and later train an autoregressive transformer on top of that.
Other simpler improvements on the current models include: (1) increase the training data from 2k hours to 60k-100k hours; (2) use more sophisticated diffusion samplers to get higher quality for the same number of sampling steps; (3) for text-based models, use larger pretrained language models to obtain embeddings, which has been shown to be very important for quality in [15].

7 C O N C L U S I O N

Generating high-quality audio efficiently is a challenging task, as it involves the generation of numerous values to accurately represent the sound waves, especially when aiming for high-fidelity stereo sound at a sample rate of 48kHz. In this work, we proposed different methods and models to generate high-quality audio from a textual description: from models targeting long-context audio with an emphasis on structure, to short-context models with an emphasis on quality, to other useful models such as the diffusion upsampler and vocoder. We introduced a new method that utilizes text-conditional diffusion models based on 1D U-Nets, allowing for the generation of multiple minutes of 48kHz audio on a single consumer GPU. Furthermore, we have provided a collection of open-source libraries to streamline future research, including potential improvements in audio autoencoders and diffusion models.

B I B L I O G R A P H Y

[1] Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Olivier Teboul, David Grangier, Marco Tagliasacchi, and Neil Zeghidour. AudioLM: a Language Modeling Approach to Audio Generation. 2022. eprint: arXiv:2209.03143.

[2] Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever. Jukebox: A Generative Model for Music. 2020. eprint: arXiv:2005.00341.

[3] Jonathan Ho, Ajay Jain, and Pieter Abbeel. "Denoising diffusion probabilistic models." In: Advances in Neural Information Processing Systems 33 (Dec. 2020), pp. 6840-6851.
[4] Jonathan Ho and Tim Salimans. Classifier-Free Diffusion Guidance. 2022. eprint: arXiv:2207.12598.

[5] Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. 2013. eprint: arXiv:1312.6114.

[6] Troy Luhman and Eric Luhman. Improving Diffusion Model Efficiency Through Patching. 2022. eprint: arXiv:2207.04316.

[7] Ozan Oktay et al. Attention U-Net: Learning Where to Look for the Pancreas. 2018. eprint: arXiv:1804.03999.

[8] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A Generative Model for Raw Audio. 2016. eprint: arXiv:1609.03499.

[9] Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural Discrete Representation Learning. 2017. eprint: arXiv:1711.00937.

[10] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. 2019. eprint: arXiv:1910.10683.

[11] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical Text-Conditional Image Generation with CLIP Latents. 2022. eprint: arXiv:2204.06125.

[12] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-Resolution Image Synthesis with Latent Diffusion Models. 2021. eprint: arXiv:2112.10752.

[13] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. 2015. eprint: arXiv:1505.04597.

[14] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation. 2022. eprint: arXiv:2208.12242.

[15] Chitwan Saharia et al. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. 2022. eprint: arXiv:2205.11487.
[16] Tim Salimans and Jonathan Ho. Progressive Distillation for Fast Sampling of Diffusion Models. 2022. eprint: arXiv:2202.00512.

[17] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. 2015. eprint: arXiv:1503.03585.

[18] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Diffusion Implicit Models. 2020. eprint: arXiv:2010.02502.

[19] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. 2017. eprint: arXiv:1706.03762.

A R C H I S O U N D : A U D I O G E N E R A T I O N W I T H D I F F U S I O N

flavio schneider

Master's Thesis
Supervised by Zhijing Jin, Prof. Bernhard Schölkopf

ETH Zurich
January 2023

A B S T R A C T

The recent surge in popularity of diffusion models for image generation has brought new attention to the potential of these models in other areas of media generation. One area that has yet to be fully explored is the application of diffusion models to audio generation.
Audio generation requires an understanding of multiple aspects, such as the temporal dimension, long term structure, multiple layers of overlapping sounds, and the nuances that only trained listeners can detect. In this work, we investigate the potential of diffusion models for audio generation. We propose a set of models to tackle multiple aspects, including a new method for text-conditional latent audio diffusion with stacked 1D U-Nets, that can generate multiple minutes of music from a textual description. For each model, we make an effort to maintain reasonable inference speed, targeting real-time on a single consumer GPU. In addition to trained models, we provide a collection of open source libraries with the hope of simplifying future work in the field. Samples can be found at bit.ly/audio-diffusion.

C O N T E N T S

1 introduction 1
1.1 Audio Generation 1
1.2 Challenges 1
1.3 Existing Methods 2
1.4 Research Questions 2
1.5 Contributions 4
1.5.1 Models 4
1.5.2 Libraries 4
1.6 Structure of the Thesis 5
2 audio representation 7
2.1 Desirable Properties 7
2.1.1 Compressibility 7
2.1.2 Decodability 7
2.1.3 Diffuseability 7
2.2 Waveform 8
2.3 Spectrograms 8
2.3.1 STFT 8
2.3.2 MEL 10
3 existing diffusion methods 11
3.1 DDPM-Diffusion 11
3.1.1 Noising (0 → t) 12
3.1.2 Denoising (t − 1 ← t) 13
3.1.3 Training Objective 13
3.1.4 Sampling 14
3.1.5 Limitations 14
3.2 DDIM 14
3.3 V-Diffusion 15
3.3.1 Noising (0 → σt) 15
3.3.2 Denoising (σt−1 ← σt) 16
3.3.3 Training Objective 16
3.3.4 Sampling (σ0 = 0 ← · · · ← σt−1 ← σt = 1) 16
4 architectures 17
4.1 Our a-unet Library 17
4.1.1 Background of U-Net 17
4.1.2 U-Net Block 17
4.1.3 Items 19
4.1.4 Plugins 20
4.2 Our audio-encoders-pytorch Library 21
5 models 23
5.1 Overview 23
5.2 Diffusion Unconditional Generator 23
5.2.1 Motivation 23
5.2.2 Method 23
5.2.3 Diffusion Method 24
5.2.4 Transforms 25
5.2.5 Usage 26
5.2.6 Evaluation 27
5.3 Text-conditional Diffusion 27
5.3.1 Motivation 27
5.3.2 Method 28
5.3.3 Evaluation 29
5.4 Diffusion Auto-Encoders with Latent Diffusion 30
5.4.1 Motivation 30
5.4.2 Method 30
5.4.3 Evaluation 31
5.5 Diffusion Upsampler 32
5.5.1 Motivation 32
5.5.2 Method 32
5.5.3 Evaluation 33
5.6 Diffusion Vocoder 34
5.6.1 Motivation 34
5.6.2 Method 34
5.6.3 Evaluation 35
5.7 Training Info 35
5.7.1 Data 35
5.7.2 Training 35
6 future work 37
7 conclusion 39
bibliography 41

1 I N T R O D U C T I O N

Music is an art of time at the intersection of fine-grained perception and symbolic pattern recognition. In this work, we will investigate the use of diffusion models to generate music, or more broadly audio, in order to gain a deeper understanding of this intersection using modern deep learning diffusion models.
Unconditional models are trained only on audio data and are able to generate new samples without any additional input. Conditional models, on the other hand, are trained on pairs of audio data and some kind of conditioning information, such as a text description, genre label, lyrics, speaker id, or some other description of the audio. At inference time, this conditioning information can be used to guide the generation of novel audio samples that match the desired characteristics. In this thesis, we will explore methods of conditional and unconditional waveform-level generation.

1.2 challenges

Multiple tradeoffs have to be considered when generating audio at the waveform level. To generate a single second of high-quality 48kHz stereo audio, 96000 values must be generated, which is comparable in size to a medium-resolution image.
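The scale of the problem is easy to make concrete with a quick back-of-the-envelope calculation; the three-minute song length and the RGB image comparison are illustrative assumptions:

```python
# Back-of-the-envelope cost of waveform-level generation
# (sample rate and channel count from the paragraph above).
SAMPLE_RATE = 48_000   # samples per second, per channel
CHANNELS = 2           # stereo

values_per_second = SAMPLE_RATE * CHANNELS
print(values_per_second)   # 96000, roughly a 179x179 RGB image (179*179*3 = 96123)

# Scaling to an entire three-minute song:
song_values = values_per_second * 180
print(song_values)         # 17280000 values to generate
```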
If the goal is to generate an entire song (hundreds of seconds) while maintaining high quality and a reasonable generation speed, the task becomes much more challenging. A common approach to generating long audio sequences is to do so in chunks; however, if the context length, i.e. the amount of audio that the model can consider at any given time, is not sufficient, the resulting structure may not be consistent over multiple seconds or minutes of generation. A longer context may allow for more consistent coarse structure, but may also lead to lower overall quality of detail, or vice versa.

1.3 existing methods

In this section, we will review some of the most well-known or influential waveform-based methods that have been developed to date. One of the pioneering waveform-level generation models is WaveNet (2016 [8]), a fully convolutional architecture that exploits dilated convolutions with various dilation factors in order to capture a large context.
It is able to synthesize a few seconds of both speech and classical piano music at 16kHz. Jukebox (2020 [2]) uses multiple quantized autoencoders to discretize sounds at 3 different resolutions, followed by a cascade of transformer upsampler models to generate the quantized representations autoregressively. Jukebox is able to generate 44kHz music conditioned on lyrics, artists and genres. The stack of transformers trades off generation speed for structure and quality. AudioLM (2022 [1]) uses a (residual) quantized autoencoder to compress the waveform into discrete tokens and a semantic encoder; a cascade of transformer decoders (semantic, coarse, fine) is then used to generate 16kHz audio continuations top-down from the semantic representation. Musika (2022) trains a set of 1D convolutional autoencoders to compress log-magnitude spectrograms, and a vocoder to reconstruct both phase and magnitude from the compressed representation using a 2D GAN discriminator trained on sequential chunks of audio; it exploits this process autoregressively to generate longer sequences of 44kHz audio.
This method has a limited context length, but is very efficient given the 1D structure of convolutions. Riffusion1 (2022) fine-tunes the Stable Diffusion model [12] on 5s chunks of mel-spectrograms at 44kHz, and uses style transfer to generate multiple coherent concatenated images while conditioning on a textual description of the song. This method has a limited 5s context length, and trades off speed given the large 2D architecture, but works surprisingly well considering that the original model is trained on images, not audio.

1.4 research questions

Diffusion models have recently demonstrated exceptional capabilities in the field of image generation [11, 12], leading to an explosion of incredible AI-generated art2. Iteratively removing small amounts of noise from pure noise allows diffusion models to hallucinate novel samples that share common attributes with the data in the training set. Compared to GANs, diffusion models in the image domain don't suffer from training instability, scale well with parameter size, and have good mode coverage. As long as the training data can be progressively corrupted from a clean to a fully covered state, diffusion models have the potential to be applied to multiple domains to generate novel samples. This opens up a wide range of possibilities beyond image generation, including video and audio generation. In this thesis, we explore the potential of diffusion models for audio generation. We will explore whether diffusion models can be used on audio as effectively as with images.

1 https://www.riffusion.com/about
2 https://www.midjourney.com/showcase/
The aim is to generate high-quality 48kHz stereo audio as efficiently as possible and to control the generation in different ways, with a focus on text-conditional audio generation.

1.5 contributions

1.5.1 Models

We introduce the following models, some of which are/will be accessible in the archisound library:

Long: a latent diffusion model for text-conditional music generation that is capable of generating audio with an extended context of multiple minutes at 48kHz, targeting context length and structure (∼857M parameters).

Crisp: a text-conditional audio generation diffusion model with a context of tens of seconds at 48kHz, targeting simplicity and high-quality waveforms (∼419M parameters).

Upsampler: a diffusion model to upsample music from 3kHz to 48kHz (∼238M parameters).
Vocoder: a diffusion model to reconstruct 48kHz waveforms from 80-channel mel-spectrograms of variable input length (∼178M parameters).

1.5.2 Libraries

Moreover, we open-source the following libraries, on which the previous models are based:

archisound3, our library including trained models ready to use. This repository doesn't contain any modelling code, but acts as a wrapper and documentation for our models hosted on Huggingface4.

audio-diffusion-pytorch5 (ADP), the main library including the proposed audio diffusion models. This library has both a-unet and audio-encoders-pytorch as dependencies. At the time of writing, this library has 550+ stars on GitHub, and has been downloaded more than 50000 times on pip.
a-unet6, a highly customizable library to build U-Net architectures in any dimension, extensible with multiple blocks and plugins. This library can be used for any type of grid data: 1D, 2D, 3D.

audio-encoders-pytorch7 (AEP), a set of encoders and autoencoders for 1D data.

3 https://github.com/archinetai/archisound
4 https://huggingface.co/archinetai
5 https://github.com/archinetai/audio-diffusion-pytorch
6 https://github.com/archinetai/a-unet
7 https://github.com/archinetai/audio-encoders-pytorch

Some additional libraries we open-source that are not documented in this thesis, but might nevertheless be interesting to the reader, include: cqt-pytorch8 for invertible CQT spectrograms using NSGT, and bitcodes-pytorch9, a method for vector quantization into binary codes.

1.6 structure of the thesis

In Chapter 2, we present the various audio representations and provide a set of tradeoffs that must be considered when selecting an appropriate representation. In Chapter 3, we describe the general principles of diffusion and then delve into the specific diffusion methods that we have tested. In Chapter 4, we examine our custom architectures, including the U-Net and autoencoder, and provide detailed descriptions of each component and how they can be easily integrated into our library. In Chapter 5, we propose a range of diffusion models that combine the diffusion methods from Chapter 3 with our custom architecture from Chapter 4.
Finally, in Chapters 6 and 7, we discuss potential future work and present our conclusions.

8 https://github.com/archinetai/cqt-pytorch
9 https://github.com/archinetai/bitcodes-pytorch

2 AUDIO REPRESENTATION

In the following section, we will introduce the different types of audio representation that we can choose from, and compare the different tradeoffs. Before that, we'll have a look at the different desirable properties that should be considered.

2.1 desirable properties

2.1.1 Compressibility

We define compressibility as the approximate number of values per second needed for high-quality audio compared to the original waveform, and how many can be easily removed without a significant loss in fidelity, e.g. by applying a convolution-only autoencoder on the representation.

2.1.1.1 Perceptibility

Perceptibility refers to how close the representation is to human hearing. This is important because if we compress a representation that carries a lot of information that we cannot perceive in the first place, we lose a lot of useful capacity. More specifically, humans hear sound in the range of frequencies from 20Hz to 20kHz, on a logarithmic scale, which means that the perceived frequency resolution decreases as we approach 20kHz.

2.1.2 Decodability

Decodability refers to how simple and fast it is to decode the given representation back to the waveform domain, where it can be reproduced.

2.1.3 Diffusability

Diffusability is a set of desirable properties that are important in order for a diffusion model to be applicable. In particular: (1) the values should be approximately in the range [−1, 1]; (2) the signal should ideally have some inductive biases that can be exploited by the network (primarily 1D or 2D convolutional blocks); (3) time-shift invariance if we are doing inpainting or autoregressive generation, i.e. the representation should look the same at different time steps for the same sound;
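The compressibility check described above, i.e. applying a convolution-only autoencoder directly to the representation, can be sketched in a few lines of PyTorch. The `WaveAutoencoder` below is a hypothetical illustration with arbitrary layer sizes, not the architecture used in this thesis:

```python
import torch
import torch.nn as nn

# Hypothetical convolution-only autoencoder for raw waveforms.
# Layer sizes are illustrative, not the thesis's actual configuration.
class WaveAutoencoder(nn.Module):
    def __init__(self, channels: int = 2, hidden: int = 32, compressed: int = 16):
        super().__init__()
        # Each stride-4 convolution shrinks the time axis 4x: two layers -> 16x.
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=8, stride=4, padding=2),
            nn.GELU(),
            nn.Conv1d(hidden, compressed, kernel_size=8, stride=4, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(compressed, hidden, kernel_size=8, stride=4, padding=2),
            nn.GELU(),
            nn.ConvTranspose1d(hidden, channels, kernel_size=8, stride=4, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = WaveAutoencoder()
x = torch.randn(1, 2, 48_000)   # one second of stereo audio at 48kHz
z = model.encoder(x)            # compressed representation: time axis reduced 16x
print(z.shape)                  # torch.Size([1, 16, 3000])
```

Training such a model with a reconstruction loss and inspecting the fidelity at different compression ratios is one way to probe how many values a representation can shed.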
(4) the representation should not have too many values in the time dimension.

2.2 waveform

Waveforms are a basic representation which we can diffuse directly: the audio is stored as a tensor of shape [C, T], where C is the number of channels (e.g. C = 2 for stereo audio) and T is the number of points used to represent the audio. If T = 48000 and we are representing audio at 48kHz, then the tensor will contain 1 second of audio; multiple seconds of high-quality stereo audio (48000×2 values per second) quickly become very large. The disadvantages are that long sequences are slow to diffuse, and that the representation does not consider perceptibility: if we apply a standard L2 loss function on waveforms, high frequencies will be harder to reconstruct than low frequencies, since high frequencies vary more rapidly over time.

2.3 spectrograms

The STFT (short-time Fourier transform) converts the audio into a complex-valued tensor of shape [C, F, T], where C is the number of channels, F is the number of frequencies and T is the sequence length. Each column represents a small chunk of the original waveform, isolated by a window function w of length W:

STFT[x](t, f) = Σ_n x[n] w[n − t] e^(−i2πfn/W)    (1)

Interestingly, we can consider the STFT as a 1D-convolution with fixed kernels of length W composed of cosine waves for the real part and sine waves for the imaginary one; in practice, the STFT can be computed efficiently using the Fast Fourier Transform (FFT). The complex values can be converted into magnitude and phase, defined as

mag(x)(t, f) := |x(t, f)|,   phase(x)(t, f) := ∠x(t, f)    (2)

a representation that can be inverted back to real and imaginary parts, and hence back to the waveform.

Figure 2: Magnitude spectrogram and phase of a single channel STFT.

The magnitude spectrogram can be easily compressed (up to 32x with very small loss in quality); the phase, on the other hand, is very hard to compress, since a lot of randomness is still present in the details of the grid. In a sense, converting the transform to magnitude and phase disentangles the easy-to-compress from the hard-to-compress parts.
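The short-time Fourier transform and its magnitude/phase decomposition can be sketched with PyTorch's built-in STFT; the window length and hop size below are illustrative choices, not the ones used in this thesis:

```python
import torch

# Minimal sketch: STFT of a mono signal, split into magnitude and phase.
# Window length and hop size are illustrative.
n_fft, hop = 1024, 256
window = torch.hann_window(n_fft)

x = torch.randn(48_000)                       # 1s of mono audio at 48kHz
X = torch.stft(x, n_fft, hop_length=hop,
               window=window,
               return_complex=True)           # complex tensor of shape [F, T]

mag = X.abs()        # mag(x)(t, f)   := |x(t, f)|
phase = X.angle()    # phase(x)(t, f) := angle of x(t, f)

print(X.shape)       # torch.Size([513, 188]): 513 frequency bins, 188 frames

# Magnitude and phase together are invertible back to the waveform:
x_rec = torch.istft(mag * torch.exp(1j * phase), n_fft, hop_length=hop,
                    window=window, length=len(x))
```

Plotting `mag` and `phase` side by side makes the asymmetry discussed above visible: the magnitude shows clear structure, while the phase looks essentially random, which is why it is so much harder to compress.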
The mel-scale spectrogram weighs frequencies on a logarithmic scale, which is closer to human perception than a linearly scaled spectrogram, making it a very common choice in audio applications.

Figure 3: Mel-scale spectrogram, scaled with log(1 + x).

3 EXISTING DIFFUSION METHODS

Diffusion models, first proposed in [3, 17], are most commonly implemented with a U-Net [7, 13] that is repeatedly called during inference for each denoising step. Since the same network is called multiple times during sampling, the weights are shared, making it a recurrent model.
Since the data can be progressively corrupted from a clean to a fully covered state, we can use this trick to jump to any intermediate noise level and denoise a single step, backpropagating only once during training. From the perspective of recurrent models, (forward) diffusion allows us to recover the memory at an intermediate state (which we can see as the corrupted datapoint) without the need to backpropagate through the entire chain. This is a useful technique for efficiently generating intermediate states, and has the advantage that it can be highly parallelized during training. Compared to recurrent models, the memory state is predefined by the (noise) corruption process and not fully learned. Diffusion exploits very similar principles as autoregressive transformer models [19], namely a highly parallelizable training process and repeated network calls with weight-sharing during sampling. Compared to other generative models like GANs, diffusion models are easier to train and don't suffer from instability problems arising from having to coordinate a generator and a discriminator.
Diffusion models are a category of powerful generative models first introduced in [17] (2015), and later popularized in [3] (2020), thanks to the impressive results obtained in image generation on CIFAR10. In this section, we will examine different diffusion methods. First, the seminal DDPM [3] method, which involves training the diffusion process with a finite number of denoising steps. Following that, DDIM [18], which introduces a few changes that generalize DDPM to an arbitrary number of steps. Then we will introduce V-diffusion from [16], a continuous diffusion method that aims to improve the mixing of the signal-to-noise ratio from DDIM. For DDPM and V-diffusion, we will highlight the most important operations, namely: (1) noising the original datapoint (signal) to a desired noise level, (2) denoising a single step with the use of our (trained) network, (3) the training objective used, and (4) a sampling technique that repeatedly applies (2).
3.1 ddpm-diffusion

DDPM [3] is one of the seminal works in diffusion models. The method starts by assuming that x0^(0), . . . , x0^(D) is a dataset of D i.i.d. points sampled from an unknown distribution q(x0) (the subscript indicates the noise level, from 0 for a clean datapoint up to a maximum of T).

[Figure: Diffusion training.]
[Figure: Diffusion inference.]

3.1.1 Noising (0 → t)

The forward diffusion process progressively corrupts a datapoint with Gaussian noise, one level at a time:

q(xt | xt−1) := N(xt | µt = √(1 − βt) xt−1, Σt = βt I)  (2)

In other words, if we want to increase the noise level of our datapoint xt−1 by one step to level t, we sample a normal distribution whose mean and covariance depend on the previous point and on some hyperparameters β1, . . . , βT, called a variance schedule, which controls the increase in noise level from one step to the next. Using the previous assumptions, we can derive q(xt | x0), i.e., a way to jump directly from noise level 0 (our clean datapoint) to any noise level t. With the reparametrization trick, it can be shown that this is also a normal distribution, formulated as:

q(xt | x0) := N(xt | √β̄t x0, (1 − β̄t) I)  (4)

where β̄t := ∏s=1..t (1 − βs) depends on all the βs selected.
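Equation 4 is what makes diffusion training cheap: any noise level is a single sampling step away from the clean datapoint. The following is a minimal numerical sketch of this jump; the linear schedule and all values below are illustrative assumptions, not necessarily those used in [3]:

```python
import numpy as np

def forward_noise(x0, t, betas, rng):
    """Sample q(x_t | x_0) in one shot via the reparametrization trick:
    x_t = sqrt(beta_bar_t) * x0 + sqrt(1 - beta_bar_t) * eps."""
    beta_bar = np.prod(1.0 - betas[: t + 1])   # beta_bar_t = prod_s (1 - beta_s)
    eps = rng.standard_normal(x0.shape)        # eps ~ N(0, I)
    return np.sqrt(beta_bar) * x0 + np.sqrt(1.0 - beta_bar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)          # assumed linear variance schedule
x0 = rng.standard_normal(16)                   # toy "clean" datapoint
xt, eps = forward_noise(x0, t=500, betas=betas, rng=rng)
```

Note that since β̄t ∈ (0, 1), the √β̄t and √(1 − β̄t) weights keep a unit-variance input at unit variance for every t.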
3.1.2 Denoising (t − 1 ← t)

The reverse process distribution q(xt−1 | xt) is also a normal distribution. However, it cannot be directly estimated, as it depends on the entire dataset. Instead, we train a neural network with parameters θ as an approximation:

pθ(xt−1 | xt) := N(xt−1 | µθ(xt), Σθ(xt))  (6)

If our model is trained properly, then similarly to the forward process we will be able to carry out a single denoising step by sampling the normal distribution using the learned mean and covariance.

3.1.3 Training Objective

To train our model, we need a handle on the true mean and covariance of the reverse process q(xt−1 | xt).
As we have seen before, this is not directly tractable; however, if we include additional information about either x0 (the true datapoint) or ǫt (the noise used to get xt from x0 in the forward process), we can compute a different but tractable auxiliary distribution. In the case where x0 is given, the distribution is:

q(xt−1 | xt, x0) = N(xt−1 | µ̃(xt, x0), Σ̃(xt, x0))  (7)

with mean µ̃(xt, x0) := (√(1−βt) (1 − β̄t−1) / (1 − β̄t)) xt + (√β̄t−1 βt / (1 − β̄t)) x0 and covariance Σ̃(xt, x0) := ((1 − β̄t−1) / (1 − β̄t)) βt I, as shown in [3]. To train our network, we then minimize the divergence between this tractable distribution and the distribution estimated with our model:

Lt := DKL[q(xt−1 | xt, x0) || pθ(xt−1 | xt)]  (8)
   = Ex0 [ (1 / (2 ∥Σθ(xt)∥²₂)) ∥µ̃(xt, x0) − µθ(xt)∥²₂ ]  (9)

This amounts to a simple L2 loss between the auxiliary mean and the mean estimated by the model, with an extra scaling factor that depends on the covariance; in [3] the covariance is fixed to Σθ(xt) = βt I.
A more rigorous argument using variational inference can be applied to show that this is a lower bound of the negative log-likelihood of the data distribution. More concretely, our model fθ will output an estimated mean given the noisy datapoint and the noise level as input: µθ(xt) = fθ(xt, t), which we can then use to sample the next xt−1 from a normal distribution. If instead we assume ǫt is given, we can follow a similar procedure to get the loss Lt:

Lt := DKL[q(xt−1 | xt, ǫt) || pθ(xt−1 | xt)]  (10)
   = E [ (β²t / (2βt(1 − β̄t) ∥Σθ(xt)∥²₂)) ∥ǫt − ǫθ(xt)∥²₂ ]  (11)

In this case our model will estimate the noise instead of the mean of the datapoint xt, i.e., ǫθ(xt) = fθ(xt, t); however, we can still recover the mean as: µ̃ = (1 / √(1−βt)) (xt − (βt / √(1 − β̄t)) ǫt).
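The mean-recovery formula can be combined with the fixed covariance Σθ(xt) = βt I into a single reverse step. In the sketch below, `eps_hat` stands in for the trained network's noise estimate ǫθ(xt); it is an illustrative placeholder, not part of any real training setup:

```python
import numpy as np

def ddpm_reverse_step(xt, eps_hat, t, betas, rng):
    """One stochastic reverse step x_{t-1} ~ p_theta(. | x_t), with the
    mean recovered from the noise estimate and covariance beta_t * I."""
    beta_t = betas[t]
    beta_bar_t = np.prod(1.0 - betas[: t + 1])
    mean = (xt - beta_t / np.sqrt(1.0 - beta_bar_t) * eps_hat) / np.sqrt(1.0 - beta_t)
    if t == 0:
        return mean                            # final step: return the mean directly
    return mean + np.sqrt(beta_t) * rng.standard_normal(xt.shape)
```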
Empirically, it has been shown in [3] that the objective can be simplified further by ignoring the scaling factor:

Lt = Eǫt [ ∥ǫt − ǫθ(xt)∥²₂ ]  (12)

The final objective function to train the model is then computed with random noise levels t sampled from a uniform distribution:

L := Et∼[1,T] [Lt]  (13)

3.1.4 Sampling

Sampling in DDPM is very straightforward: we start with xT ∼ N(0, I) and recursively call the model T times, using at each step the estimated means µθ(xt) (or noises ǫθ(xt)) of the T normal distributions to get each subsequent sample: xT−1 ∼ pθ(· | xT), . . . , x1 ∼ pθ(· | x2), x0 ∼ pθ(· | x1), where x0 will be our generated output datapoint.
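The whole sampling chain then reduces to a loop over reverse steps. In this sketch the trained network is replaced by a hypothetical `model(x, t)` callable returning a noise estimate, just to exercise the loop:

```python
import numpy as np

def ddpm_sample(model, shape, betas, rng):
    """Ancestral DDPM sampling: start from x_T ~ N(0, I), then apply
    T stochastic reverse steps using the model's noise estimates."""
    x = rng.standard_normal(shape)             # x_T ~ N(0, I)
    for t in reversed(range(len(betas))):
        beta_t = betas[t]
        beta_bar_t = np.prod(1.0 - betas[: t + 1])
        eps_hat = model(x, t)                  # network's noise estimate
        x = (x - beta_t / np.sqrt(1.0 - beta_bar_t) * eps_hat) / np.sqrt(1.0 - beta_t)
        if t > 0:                              # no extra noise on the last step
            x = x + np.sqrt(beta_t) * rng.standard_normal(shape)
    return x

# Dummy "model" that always predicts zero noise, just to run the loop:
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 50)            # assumed short schedule
sample = ddpm_sample(lambda x, t: np.zeros_like(x), (8,), betas, rng)
```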
Note that this is a stochastic sampling process, since at each step additional noise is added from sampling the normal distribution.

3.1.5 Limitations

This method requires on the order of hundreds of sampling steps to get good-quality samples. Compared to the more modern methods that follow, the number of steps T is a fixed hyperparameter both during training and sampling, limiting its flexibility.

3.2 ddim

DDIM [18] is another seminal work for diffusion models. By introducing a few changes to DDPM, the number of sampling steps used during inference can be dynamically changed while maintaining the same training procedure. This allows sampling between 10x and 100x faster, and trading speed for quality at will.
A direct implication of having a variable number of steps during sampling is that we can train with a very large T, or even an infinitely large T, leading to a continuous diffusion process. The idea of DDIM is that if we know both x0 and xt, we can use q(xt−1 | xt, x0) to sample xt−1. There are two possibilities: either train our network to predict x0 directly (i.e., with no sampling), or train our network to predict the noise ǫt (as done in DDPM), which combined with xt can be used to infer x0. A key observation is that using this alternative method doesn't change the training objective, as the objective only depends on the forward diffusion process. Importantly, we can use a different forward process to recover the next step, for example use q(xt−2 | xt, x0) to jump directly to xt−2 instead of xt−1, essentially skipping a sampling step and speeding up the process.
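In the β̄t notation of Section 3.1.1, the deterministic variant of this update first infers x̂0 from the noise estimate and then remixes it at an arbitrary earlier level s < t. This is a sketch under that notation, with `eps_hat` as a placeholder for ǫθ(xt):

```python
import numpy as np

def ddim_jump(xt, eps_hat, t, s, betas):
    """Deterministic DDIM-style update from level t to any earlier
    level s: infer x0_hat from (x_t, eps_hat), then remix at level s."""
    bb_t = np.prod(1.0 - betas[: t + 1])       # beta_bar_t
    bb_s = np.prod(1.0 - betas[: s + 1])       # beta_bar_s, with s < t
    x0_hat = (xt - np.sqrt(1.0 - bb_t) * eps_hat) / np.sqrt(bb_t)
    return np.sqrt(bb_s) * x0_hat + np.sqrt(1.0 - bb_s) * eps_hat
```

With the true noise plugged in, the inferred x̂0 is exact, which is why steps can be skipped without changing how the network was trained.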
If we make the time-step continuous, we can jump to any intermediate step in (0, t]. Even more interestingly, this continuous sampling procedure can be viewed through the lens of ordinary differential equations, allowing us to use a variety of existing samplers, like the basic Euler method or more advanced ODE samplers.

3.3 v-diffusion

V-diffusion, or v-objective diffusion [16], is a diffusion method inspired by DDIM, trained with a continuous value σt ∈ [0, 1]. This is the method we found to work best on a variety of audio tasks. In v-diffusion, if σt = 0 then xσt represents a datapoint x from the data distribution, and if σt = 1, it will be Gaussian noise ǫ. In DDIM we can choose to either use the model to predict x0, or use it to predict ǫt; in this case, however, a velocity value vσt is estimated, from which both x0 and ǫσt can be inferred.
3.3.1 Noising (0 → σt)

[Figure 6: V-diffusion semicircle.]

The noising process uses a weighting on a circle:

xσt = ασt x0 + βσt ǫ  (14)

where ασt := cos(φt) and βσt := sin(φt), with φt := (π/2) σt. When σt = 0, then xσt = x0, i.e., no noise is added; if instead σt = 1, then xσt = x1 = ǫ, i.e., only noise ǫ ∼ N(0, I) remains. Intuitively, using the weighting on a circle makes sure that as we move σt linearly from 0 to 1, the noising process slowly removes information from x0.
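Since cos²(φt) + sin²(φt) = 1, the circular weighting keeps a unit-variance signal at unit variance for every noise level. A small sketch of Equation 14:

```python
import numpy as np

def v_noise(x0, eps, sigma_t):
    """V-diffusion noising (Eq. 14): alpha = cos(phi), beta = sin(phi),
    with phi = (pi / 2) * sigma_t."""
    phi = 0.5 * np.pi * sigma_t
    return np.cos(phi) * x0 + np.sin(phi) * eps

x0, eps = np.ones(4), np.full(4, -1.0)
# sigma_t = 0 returns the clean datapoint, sigma_t = 1 returns pure noise.
```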
By sampling a random σt ∈ [0, 1], we are more likely to pick a value that resembles x0 than pure noise ǫ, meaning that the model will more often see data with smaller amounts of noise. Empirically, this has been shown to be beneficial over standard DDIM diffusion.

3.3.2 Denoising (σt−1 ← σt)

To denoise from noise level σt to noise level σt−1, we can use our velocity-estimating model v̂σt = fθ(xσt, σt). Note that the velocity here is defined as the derivative vσt := ∂xσt/∂σt, i.e., how much the datapoint changes with a small change in the noise level σt (see the circle figure).
As mentioned before, using an estimate of vσt we can obtain both x0 and ǫσt, which in turn can be used to estimate xσt−1 in DDIM style:

v̂σt = fθ(xσt, σt)  (15)
x̂0 = ασt xσt − βσt v̂σt  (16)
ǫ̂σt = βσt xσt + ασt v̂σt  (17)
x̂σt−1 = ασt−1 x̂0 + βσt−1 ǫ̂σt  (18)

The first three lines show how to recover the clean datapoint x0 and the noise ǫσt from vσt, and the last line remixes the noise with the estimated clean datapoint to get xσt−1. These equations can be formally obtained by using trigonometric properties on the definition of velocity (as shown in the appendix of [16]), and intuitively understood by rearranging vectors on the semicircle.

3.3.3 Training Objective

By taking the derivative of the noising formulation, we can compute the true velocity vσt = ασt ǫ − βσt x0. The training objective is then:

L = Eσt∼[0,1] [ ∥v̂σt − vσt∥²₂ ]  (19)
  = Eσt∼[0,1] [ ∥fθ(xσt, σt) − (ασt ǫ − βσt x0)∥²₂ ]  (20)

where ǫ ∼ N(0, I) and xσt is computed according to the noising formulation.
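Equations 15-18 and the regression target can be sketched together; `v_hat` below is a stand-in for the network output fθ(xσt, σt):

```python
import numpy as np

def v_denoise_step(x_t, v_hat, sigma_t, sigma_prev):
    """Recover x0_hat and eps_hat from the velocity estimate (Eqs. 16-17),
    then remix them at the lower noise level sigma_prev (Eq. 18)."""
    a_t, b_t = np.cos(0.5 * np.pi * sigma_t), np.sin(0.5 * np.pi * sigma_t)
    a_p, b_p = np.cos(0.5 * np.pi * sigma_prev), np.sin(0.5 * np.pi * sigma_prev)
    x0_hat = a_t * x_t - b_t * v_hat           # Eq. 16
    eps_hat = b_t * x_t + a_t * v_hat          # Eq. 17
    return a_p * x0_hat + b_p * eps_hat        # Eq. 18

def v_target(x0, eps, sigma_t):
    """True velocity v = alpha * eps - beta * x0, the regression target."""
    a, b = np.cos(0.5 * np.pi * sigma_t), np.sin(0.5 * np.pi * sigma_t)
    return a * eps - b * x0
```

Plugging the true velocity into the step recovers x0 and ǫ exactly, which is a quick way to check the algebra of Equations 16-17.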
3.3.4 Sampling (σ0 = 0 ← · · · ← σt−1 ← σt = 1)

To obtain a new datapoint x̂0, some starting random noise ǫ ∼ N(0, I) is sampled, and the denoising procedure previously demonstrated is iteratively applied over a linear sigma-schedule.

4 architectures

4.1 a-unet

4.1.1 Background of U-Net

U-Nets are a type of convolutional architecture originally developed for image segmentation. They consist of an encoder network and a decoder network connected by skip connections, allowing the model to learn and preserve fine details at multiple resolutions. The original architecture used 2D convolutions to exploit the spatial structure of images, but in our case we will adapt it to 1D convolutions. Modern versions incorporate numerous enhancements and improvements, including new skip connections, convolutional blocks, and attention variants, giving rise to a large variety of U-Nets. The goal of a-unet is to provide a toolbox of composable building blocks with which the basic template can be altered, so that experimentation and iteration are fast.

[Figure 7: A-UNet block, the core component of the architecture.]

In order to build a generic U-Net block, it is necessary to identify its core components (illustrated in Figure 7). These include a downsampling block that simultaneously reduces the resolution and number of channels of the input (typically implemented with a single convolution), a stack of customizable processing items (see subsection 4.1.3 for details), an inner block that may contain another instance of the block recursively, a second stack of processing items that typically mirrors the first stack, an upsampling block that reverses the effects of the downsampling (typically implemented with a single transposed convolution), and a skip block that merges the skip connection using some operation. Furthermore, we select 3 possible types of conditioning contexts that can be injected in the processing items, namely: a feature-vector based conditioning typically used with diffusion to provide the noise level, an embedding based conditioning that injects multiple embedding vectors as context, typically used for text/CLIP-embedding based conditioning, and lastly a channel-based conditioning used to inject entire stacks of channels in the block.
Depending on the task, we might use a different combination of conditioning methods. All described characteristics can be defined and customized using the following block:

    from a_unet.apex import Block

    block = Block(
        dim=1,
        in_channels=2,
        channels=4,
        factor=2,
        items=[...],
        # Optional
        items_up=[...],
        downsample_t=...,
        upsample_t=...,
        skip_t=...,
        inner_block=...,
    )

This is a building block for a U-Net, where we can customize the number of input/output channels (in_channels), the number of channels post-downsampling, and the downsampling factor. The items list will contain the different items that will be duplicated after the inner block. Optionally, we can change the type of skip connection, downsampling and upsampling operations. The inner_block can be another instance of Block to recursively nest multiple blocks. Since a U-Net is usually composed of multiple nested blocks where the number of in_channels of the inner block must match the number of channels of the outer block, we provide XUNet as a glue class, and XBlock as a template class for Block to make this process more convenient and automated:

    from a_unet.apex import XUNet, XBlock

    unet = XUNet(
        dim=1,
        in_channels=...,
        blocks=[
            XBlock(channels=16, factor=2, items=[...]),
            ...
        ],
    )

[Figure 8: U-Net items.]

With XBlock we can use a set of predefined items, including: a FeedForwardItem (F) for MLP-like processing, an AttentionItem (A) for self-attention between region vectors, a CrossAttentionItem (C) for cross-attention between region vectors and a provided set of embedding vectors, a ModulationItem (M) to apply modulation using a provided feature vector, and an inject item (I) for injecting a set of provided vectors. Items can be arranged in the example combination from Figure 8, or any other combination. Parameters can also be provided to a specific skip connection skip_t in the XUNet, which will in turn be automatically forwarded to all blocks. Additional customized items can be easily included without altering the template code, making experimentation very simple.

4.1.4 Plugins

Plugins are used to augment the U-Net model with additional functionality.
It's often the case that we have to wrap the U-Net model with some pre- and post-transformation, or that we have to alter or augment the inputs provided to the U-Net. In order to maintain a modular structure, plugins can be used to directly modify the U-Net type without having to change the model code.

4.1.4.1 Time Conditioning Plugin

The time conditioning plugin is used to convert a floating point value to a conditioning feature vector; this is useful during diffusion to provide the current noise level, or timestep. To obtain the time feature vector from a floating point value, a learned weight is multiplied by the time information to get a frequency vector that is then processed with a pair of sin and cos to get Fourier features. The Fourier features are then transformed to a learned feature vector of the desired size by a stack of MLPs.
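The described scheme (learned frequencies, sin/cos Fourier features, then an MLP stack) can be sketched as follows; module and parameter names here are illustrative, not the actual plugin internals:

```python
import math
import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    # Illustrative sketch: learned frequencies produce sin/cos Fourier
    # features of the noise level, followed by a small MLP stack.
    def __init__(self, features: int, num_freqs: int = 16):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_freqs))  # learned frequencies
        self.mlp = nn.Sequential(
            nn.Linear(2 * num_freqs, features),
            nn.SiLU(),
            nn.Linear(features, features),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: [batch] floating point noise levels / timesteps
        freqs = 2 * math.pi * t[:, None] * self.weight[None, :]  # frequency vector
        fourier = torch.cat([freqs.sin(), freqs.cos()], dim=-1)  # Fourier features
        return self.mlp(fourier)  # [batch, features]

emb = TimeEmbedding(features=64)(torch.tensor([0.1, 0.5]))
print(emb.shape)  # torch.Size([2, 64])
```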
This function can be easily added to the base U-Net as:

UNetWithTime = TimeConditioningPlugin(UNet)

This extends the U-Net with an additional time parameter, which can be one or more floating point values for each batch element.

4.1.4.2 Embedding Classifier Free Guidance Plugin

Classifier free guidance is a method proposed in [4]. We provide a ClassifierFreeGuidancePlugin used to increase the conditioning strength of the provided embedding. During training, the embedding is masked with a fixed (learned) embedding with a small probability, in order to ensure that the network is able to generate realistic output without access to any conditioning information. During inference, the network is called twice, once with the conditioning embedding to get ŷe, and once with the fixed embedding used as mask to get ŷm.
A scaling factor embedding_scale (λ) is then used to guide the network to produce an output that gives more or less importance to the conditioning embedding compared to the masked embedding:

ŷ = ŷm + (ŷe − ŷm) · λ (22)

This plugin can be easily used by augmenting the U-Net as:

UNetCFG = ClassifierFreeGuidancePlugin(
    net_t=UNet,
    embedding_max_length=64
)

Later, the new UNetCFG model can be called with the additional parameter embedding_mask_proba to probabilistically mask a batch of embeddings during training (e.g. a value of 0.1 will mask 10% of the embeddings with a fixed embedding of length embedding_max_length), or with an embedding_scale parameter during inference, to call the U-Net twice with and without masking, and apply the scaling factor. In both cases, the embedding parameter must be provided as well.
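The inference-time combination is a one-line interpolation of the two forward passes; the function name below is hypothetical, and the formula is the standard classifier-free guidance rule ŷ = ŷm + (ŷe − ŷm) · λ:

```python
import torch

def cfg_combine(y_cond: torch.Tensor, y_masked: torch.Tensor, scale: float) -> torch.Tensor:
    # y_cond: prediction obtained with the conditioning embedding
    # y_masked: prediction obtained with the fixed (mask) embedding
    # scale: embedding_scale, i.e. the guidance strength lambda
    return y_masked + (y_cond - y_masked) * scale

y_e, y_m = torch.ones(2, 4), torch.zeros(2, 4)
print(cfg_combine(y_e, y_m, 1.0))  # scale 1 recovers the conditioned prediction
```

With scale 0 the output ignores the conditioning entirely, and values above 1 push the output further toward the conditioned prediction.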
4.1.4.3 Text Conditioning

The text conditioning plugin augments the U-Net embedding conditioning information with a learned text embedding from a frozen pretrained language model. By default, the T5-base transformer model from [10] is used if no embedder is provided.

UNetWithText = TextConditioningPlugin(
    net_t=UNet,
    embedder=T5Embedder()
)

This adds an additional text field to the U-Net forward method that automatically extends the embedding with text embeddings from a pretrained language model.

4.2 our audio-encoders-pytorch library

The autoencoder component has a similar structure to the U-Net, with a few changes: (1) to make it an autoencoder, no skip connections will be used, (2) no attention blocks will be used, to make it generic to any input sequence length, and (3) no conditioning blocks will be applied. We open-source the autoencoder library audio-encoders-pytorch (AEP) as a separate library from a-unet.
AEP includes both encoders and decoders, and a set of bottlenecks that can be used to normalize the latent space, namely (1) a variational bottleneck in the style of VAEs [5], (2) a simple tanh bottleneck, and (3) a quantizer bottleneck, similar to the one proposed by VQ-VAEs [9]. Furthermore, we propose two encoders that encode spectrograms channelwise into a 1D latent, namely ME1d (a magnitude spectrogram only encoder) and MelE1d (a mel spectrogram encoder), both compatible with the different bottlenecks.

5 MODELS

5.1 overview

In this section we describe various diffusion models and their underlying structures. We investigate various diffusion models that serve different purposes and functions, including upsampling and autoencoding. Although these models may have distinct applications, they are ultimately utilized with the goal of audio generation. All of the different models are implemented using variations and combinations of the previously described architectures (i.e. U-Nets and auto-encoders). The models proposed are implemented in the audio-diffusion-pytorch (ADP) library.

5.2 diffusion unconditional generator

The diffusion generator is the simplest model we propose to synthesize unconditional audio and is implemented with a single 1D U-Net.

5.2.1 Motivation

The unconditional diffusion model is a good starting point to test the overall quality of the particular architecture and diffusion method used. It doesn't include any type of conditioning, making the dataset and training procedure very simple, and at the same time it can give a good idea of the generation quality.
5.2.2 Method

The diffusion generator takes a raw high-quality stereo audio source as input from the datasets, which is then corrupted to a random noise level based on the chosen diffusion method. Using a U-Net, the generator then predicts the output, which may be the denoised input or a value that is used to compute the denoised input, depending on the type of diffusion method employed. The noise level (usually called time or σ) is provided as conditioning to the network through an encoded feature vector, to tell the network how much noise must be removed from the provided input. For the diffusion generator, neither the embedding conditioning nor the cross attention blocks are used.

Figure 9: Diffusion model training. Audio is sampled from the dataset and corrupted with varying noise levels.

Figure 10: Diffusion model inference. During inference, a random vector with the same shape as a training sample is drawn, and the U-Net is iteratively invoked to generate a new plausible sample from the data distribution.

5.2.3 Diffusion

We evaluated the performance of the proposed model with different diffusion methods. Out of the box, the model demonstrated good results; around 50 sampling steps with a basic sampler are required during inference to generate reasonable samples, trading off sampling speed and sample quality.
5.2.4 Transforms

Independently of the diffusion method used, this model without any addition struggles to generate more than a few seconds of sound. If the raw waveform is provided to the network, the initial convolutional blocks of the U-Net will have to process huge samples; e.g. even a single second of high-quality 48kHz audio requires 48000 values to be processed by the first convolutional block. This can be a speed issue if the audio is not downsampled quickly enough in the U-Net, as the inefficiency will compound over the number of sampling steps of the diffusion process. In addition to that, if attention blocks are used, we will have to downsample enough to make sure that the number of timesteps is in the range of 1024 or 2048 values. Exceeding that will slow down self-attention drastically, due to the n² computational complexity for sequence length n.
Hence, a lot of downsampling is required with long audio samples if we want to satisfy these criteria. To mitigate the challenges mentioned earlier, we investigate the use of various methods and audio transforms to convert the raw audio source into a representation that reduces the temporal dimension in exchange for additional channels.

5.2.4.1 Patching

The first transform is patching, proposed originally for the image domain in [6]. We adapt patching to the 1D domain, where the idea is to group sequential time steps into chunks that will then be transposed to channels. Given a patch size p, the length t is reduced to t/p while the number of channels increases to c · p; at the end of the U-Net processing, the channels are unchunked back to the full length.
We found patching to give drastic speedups, almost by a factor of p for p = 2, 4, 8, 16, 32, ..., allowing us to train models with much longer audio sources. However, even if the audio generation quality almost matches the non-patched version, audible aliasing is present with all factors. This drawback is likely due to the repeated unchunking process, which will have a repeating structure, creating a high-frequency sine wave in the signal. Furthermore, we found that patching with p ⩾ 64 started to degrade quality, probably due to some capacity constraint in the channel dimension. We can think of patching as a deterministic auto-encoding process with a downsampling factor of p.
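The 1D patching transform amounts to a pair of reshapes; a sketch with hypothetical helper names, checking that unchunking is the exact inverse:

```python
import torch

def patch_1d(x: torch.Tensor, p: int) -> torch.Tensor:
    # Group p sequential time steps into channels: [b, c, t] -> [b, c*p, t/p]
    b, c, t = x.shape
    return x.reshape(b, c, t // p, p).permute(0, 1, 3, 2).reshape(b, c * p, t // p)

def unpatch_1d(x: torch.Tensor, p: int) -> torch.Tensor:
    # Inverse: [b, c*p, t/p] -> [b, c, t]
    b, cp, tp = x.shape
    return x.reshape(b, cp // p, p, tp).permute(0, 1, 3, 2).reshape(b, cp // p, tp * p)

x = torch.randn(1, 2, 2**16)
z = patch_1d(x, 16)                              # [1, 32, 4096]
assert torch.equal(unpatch_1d(z, 16), x)         # lossless round trip
```

Because the transform is a fixed rearrangement, it behaves like a deterministic auto-encoder with downsampling factor p, as noted above.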
5.2.4.2 STFT

The second transform is the previously introduced STFT. We use the common setting of a 1024 FFT size and window length, with a 256 hop size. By wrapping the U-Net with the STFT and iSTFT, the transform downsamples the length of the audio by 1024 while equally increasing the channel count. The STFT is implemented with the Fast Fourier Transform, hence it's efficient to apply. No normalization is required on the spectrogram, since the diffusion loss will still be applied on the reconstructed wave. This method gives great speedups thanks to the large downsampling but, similarly to patching, suffers from degradation in quality compared to the raw wave representation. Perceptible noise is present in the generations both when transforming to magnitude+phase and when using real+imaginary.
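Wrapping the wave with an STFT pre-transform can be sketched with torch.stft; the function name is illustrative, and real and imaginary parts are stacked as channels (the iSTFT inverse at the other end of the U-Net is analogous):

```python
import torch

def wave_to_stft(x: torch.Tensor, n_fft: int = 1024, hop: int = 256) -> torch.Tensor:
    # Waveform [b, c, t] -> real+imaginary STFT channels [b, c*(n_fft+2), frames]
    b, c, t = x.shape
    s = torch.stft(x.reshape(b * c, t), n_fft=n_fft, hop_length=hop,
                   window=torch.hann_window(n_fft), return_complex=True)
    s = torch.view_as_real(s)                      # [b*c, n_fft/2+1, frames, 2]
    return s.permute(0, 1, 3, 2).reshape(b, -1, s.shape[2])

x = torch.randn(1, 2, 2**15)
print(wave_to_stft(x).shape)  # torch.Size([1, 2052, 129])
```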
5.2.4.3 Learned Transform

Lastly, we propose a learned transformation, with a single convolutional and transposed-convolutional block at the start and, respectively, the end of the U-Net. The transform consists in using a large kernel size and stride of 64. This will down-sample the original signal in a single step, trading off small amounts of speed compared to the deterministic patching or the FFT-implemented STFT. However, since it's a convolutional method, we can choose the number of channels and increase it to a larger value (e.g. 128, double the kernel size and stride) than used during patching, giving more capacity to be resilient to artifacts.
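A sketch of such a learned transform under the stated settings (kernel size and stride of 64, with an assumed 2-channel input and 128 learned channels):

```python
import torch
import torch.nn as nn

# One strided Conv1d downsamples the wave in a single step; a matching
# ConvTranspose1d restores the original length at the end of the U-Net.
down = nn.Conv1d(in_channels=2, out_channels=128, kernel_size=64, stride=64)
up = nn.ConvTranspose1d(in_channels=128, out_channels=2, kernel_size=64, stride=64)

x = torch.randn(1, 2, 2**16)
z = down(x)   # [1, 128, 1024]: length divided by 64, channels increased to 128
y = up(z)     # [1, 2, 65536]: length restored
print(z.shape, y.shape)
```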
At the same time, we can use ideas from the STFT and have large overlapping windows with learned kernels instead of fixed sine/cosine waves (e.g. kernel size 128, stride 64, 64 channels, with padding to preserve dimension), which can help to overcome aliasing. We found this to be the best quality/speed tradeoff method of pre-transforming audio.

5.2.5 Usage

The diffusion generation model proposed is constructed by first adding the LTPlugin to the default U-Net UNetV0. This plugin wraps the U-Net with the previously described learned transform. After that, we have to provide the U-Net type to the DiffusionModel class, which is responsible for constructing the U-Net, the diffusion training method (by default V-Diffusion), and the diffusion sampler (by default DDIM).
from audio_diffusion_pytorch import DiffusionModel, UNetV0, LTPlugin, VDiffusion, VSampler

UNet = LTPlugin(
    UNetV0,
    num_filters=128,
    window_length=64,
    stride=64
)

model = DiffusionModel(
    net_t=UNet,
    in_channels=channels,
    channels=[256, 256, 512, 512, 1024, 1024],
    factors=[1, 2, 2, 2, 2, 2],
    items=[2, 2, 2, 2, 4, 4],
    attentions=[0, 0, 0, 0, 1, 1],
    attention_features=64,
    attention_heads=12,
    diffusion_t=VDiffusion,
    sampler_t=VSampler
)

This model can be easily used to get the diffusion loss for training (which automatically applies the entire diffusion process) or to sample a new element provided the starting noise.

import torch

# Training
x = torch.randn(1, 2, 2**21) # [batch, channels, length]
loss = model(x)

# Sampling
noise = torch.randn(1, 2, 2**21)
sample = model.sample(noise=noise, num_steps=50)
5.2.6 Evaluation

We found that it's important for quality to have a single non-downsampled block at the start to process the transformed audio at full resolution. Furthermore, attention blocks are crucial for the temporal consistency of the generated audio, but can only be applied after the original waveform is downsampled to around 1024-2048 length. For example, if the original audio has length 2^19 (i.e. ∼11s at 48kHz), we downsample by 64 = 2^6 in the learned transform, and by 2^3 in the 4 blocks before the first attention block, hence the context length of the first attention blocks will be in the desired range of 2^10 = 1024. This model can generate high quality audio over tens of seconds, possibly more depending on the speed requirements. In general, a larger set of initial convolutional/resnet blocks (closer to the waveform) will result in better audio quality, at the cost of generation speed.
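The context-length arithmetic above can be checked directly, with the values taken from the example in the text:

```python
# Length 2**19 (~11 s at 48 kHz), a learned transform with stride 64, then
# per-block downsampling factors [1, 2, 2, 2] before the first attention block.
length = 2**19
context = length // 64                 # after the learned transform: 2**13
for factor in [1, 2, 2, 2]:            # the 4 blocks before the first attention
    context //= factor
print(context)  # 1024, within the desired 1024-2048 attention range
```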
We found that the architecture is able to generalize to longer samples than it was trained on, if attention blocks are used. The samples maintain good long-context awareness even when doubling the training length or more. Note that this increases the attention context size and hence needs to be considered before training.

5.3 text-conditional diffusion

5.3.1 Motivation

We used text as a means of conditioning for several reasons.
In Imagen [15] it has been shown that pretrained and frozen language models can be successfully applied to condition the diffusion process to generate images matching a textual description, and that increasing the size of the language model results in improved text-image matching. This hints at the fact that a similar method might also work for audio. Using free-form text for all conditioning also makes the interface more generic and easy to use.

5.3.2 Method

Since we target music generation, we train on metadata, including the title of the song, the artist, the album, the genre, the position of the current chunk within the song, and how many total chunks the song is made of (e.g. 1 of 4). To make the conditioning more robust, we shuffle the list of metadata and drop each element with a small probability; 50% of the time we join the list with spaces, and the other 50% of the time we use commas. We use a frozen T5 transformer encoder to encode the textual representation into an embedding, which is then used to condition the diffusion model. To increase the strength of the text conditioning, we apply classifier-free guidance [4]: during training, the text embedding is dropped with some probability in favor of a fixed learned embedding, and at inference the guidance scale can be increased to more closely match the text.
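Classifier-free guidance itself is simple to sketch: the model is evaluated once with the text embedding and once with the fixed learned null embedding, and the two predictions are blended (a generic sketch of the technique in [4], not the actual model code; `denoise` stands in for the U-Net):

```python
import numpy as np

def cfg_predict(denoise_fn, x, text_emb, null_emb, scale):
    """Classifier-free guidance [4]: blend the text-conditional and
    unconditional predictions. scale = 1 recovers the conditional
    model; larger scales follow the text conditioning more closely."""
    pred_cond = denoise_fn(x, text_emb)   # conditioned on the T5 text embedding
    pred_null = denoise_fn(x, null_emb)   # conditioned on the fixed learned embedding
    return pred_null + scale * (pred_cond - pred_null)

# Toy denoiser: its prediction shifts with the conditioning embedding.
denoise = lambda x, emb: x * 0.5 + emb.mean()

x = np.zeros(4)
text_emb, null_emb = np.full(8, 2.0), np.zeros(8)
out = cfg_predict(denoise, x, text_emb, null_emb, scale=5.0)
print(out)  # pred_null (0) + 5 * (2 - 0) = 10 everywhere
```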
5.3.3 Evaluation

We found text conditioning to work well to match audio with the textual description, especially when using the genre of the song or more generic words that are found in titles. We also tried text-to-speech and found that the model is able to mumble a few words, but without producing intelligible speech; this is a common problem in TTS that is usually solved with the help of additional alignment information.
5.4 diffusion auto-encoders with latent diffusion

5.4.1 Motivation

Patching, STFT, and learned transforms can be used to reduce the input length during the diffusion process. Those approaches are advantageous if we want to train a single model end-to-end; however, they are suboptimal since the waveform is expanded to its original full-length shape multiple times during sampling, slowing down the process. A more appropriate way would be to first encode the waveform, then run the diffusion loop on the compressed representation, never expanding it to the full waveform until the end of the loop. This is the idea proposed in [12] (latent diffusion), where a variational autoencoder is first used to compress images by a few factors to a smaller latent space, and diffusion is later applied to that latent.
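The sampling-cost argument can be made concrete: in latent diffusion, every denoising step operates on the small latent, and the expensive expansion to the full waveform happens exactly once at the end (a schematic sketch with toy components, not the actual model code):

```python
import numpy as np

def sample_latent_diffusion(denoise_step, decode, latent_shape, num_steps):
    """Run the whole diffusion loop in the compressed latent space and
    expand to the full waveform only once, at the very end."""
    z = np.random.randn(*latent_shape)        # start from latent-sized noise
    for step in range(num_steps):
        z = denoise_step(z, step)             # cheap: operates on the latent
    return decode(z)                          # expensive expansion, done once

# Toy components: 64x compressed latent, decoder repeats to full length.
denoise_step = lambda z, step: z * 0.9
decode = lambda z: np.repeat(z, 64)           # latent -> waveform length

waveform = sample_latent_diffusion(denoise_step, decode,
                                   latent_shape=(1024,), num_steps=50)
print(waveform.shape)  # (65536,): full length is produced only once
```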
By compressing the audio before applying diffusion, we can drastically speed up the diffusion sampling procedure, making an important case for an efficient and good quality autoencoder.

5.4.2 Method

There are different ways to implement the autoencoder; however, an important property is that we must be able to apply the diffusion process to its latent space, hence some sort of normalization is required to make sure the values are in the range [−1, 1]. Furthermore, the autoencoder should compress as much as possible without a significant loss in quality: the smaller the latent, the faster the inner diffusion model will be to process and generate. We experimented with different autoencoders, and found that directly compressing the waveform can only provide around 2x-4x compression without a significant loss in quality.
On the other hand, as we have discussed in the representation section, compressing magnitude or mel spectrograms can provide much higher compression rates. The downside is that the spectrogram requires a model (vocoder) to reconstruct the original waveform, even from a non-compressed state. In this work, we propose to use a magnitude diffusion autoencoder: an encoder (ME1d) first encodes the waveform into a magnitude spectrogram, which is then encoded into a latent compressed 64x compared to the original waveform; a diffusion model then reconstructs the waveform conditioned on the latent, acting both as a deterministic compressing encoder and a diffusion vocoder at the same time. In order to make sure the latent space is normalized, we use a tanh function on the bottleneck. Since the decoding/vocoding process is a diffusion model, the waveform can be quickly reconstructed from the latent by using a small step count; if instead a more accurate reconstruction is desired, a higher step count is required.
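The tanh bottleneck mentioned above is easy to sketch: whatever the encoder produces, the latent handed to the diffusion process is squashed into (−1, 1) (a minimal sketch; the real encoder is a learned network, not these toy values):

```python
import numpy as np

def tanh_bottleneck(features):
    """Squash unbounded encoder features into (-1, 1) so the latent is
    a valid target for the diffusion process, which expects values in a
    normalized range."""
    return np.tanh(features)

features = np.array([-10.0, -0.5, 0.0, 0.5, 10.0])  # unbounded encoder output
latent = tanh_bottleneck(features)
print(latent.min() > -1.0 and latent.max() < 1.0)  # True: safely in range
```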
To make sure the diffusion autoencoder is generic to any waveform length, we remove attention blocks and use a convolutional-only architecture.

[Figure: Diffusion autoencoder training.]
[Figure: Diffusion autoencoder inference.]

5.4.3 Latent generation

Since the representation is heavily compressed, we can train a second diffusion model to generate the latent with text conditioning, in the style of latent diffusion, and use the diffusion autoencoder as a decoder to expand the representation back to a waveform. The resulting cascaded diffusion generator has good text-audio binding, real-time generation speed on a single GPU, and a large context length.

[Figure: Two-stage diffusion generator with diffusion decoder.]

5.5 diffusion upsampler

5.5.1 Motivation

Diffusion upsamplers can be seen as a special case of diffusion autoencoders, where the encoding function is fixed to be the downsampling operation. Upsamplers can be used for different purposes: (1) generate audio at a low sample rate with a primary model and later upsample it with a secondary upsampler model, or (2) increase the sample rate of an existing recording. From the perspective of spectrograms, downsampling a waveform corresponds to setting to zero the top half of the grid (or image), starting at some frequency; the model we propose, however, works directly in the waveform domain.

5.5.2 Method

During inference, we interpolate the low sample rate channels to match the high sample rate length, and append them as additional context to the input channels of the diffusion model; the sampling process then reconstructs the missing high-frequency content of the provided waveform (e.g. from 3kHz to 48kHz).

[Figure: Diffusion upsampler training.]
[Figure: Diffusion upsampler inference.]

5.5.3 Evaluation

Depending on the complexity of the dataset, diffusion upsamplers can get very good results when upsampling by 16x or more. We found upsamplers to excel on speech data, as it is likely an easier task than upsampling music. An embedding can be provided as additional guidance to help the reconstruction, especially if upsampling from very low sample rates. Attention blocks and larger context lengths can also help.

5.6 diffusion vocoder

5.6.1 Motivation

Spectrograms are a compressed representation closely matching what humans perceive, making them an ideal representation for audio generation. However, properly turning a spectrogram back into a playable audio waveform is not a trivial task, making the case for the commonly used deep-learning based vocoders. Learned vocoders can produce very good results but tend to produce artifacts, and models for high-quality 48kHz music vocoding are still lacking. In the following section, we propose a simple adaptation that allows us to turn our U-Net architecture with almost no change into a high-quality music vocoder.

5.6.2 Method

Similarly to the upsampler, the diffusion vocoder is trained by first converting the waveform to a mel spectrogram, then expanding the spectrogram with a transposed convolution back to its waveform shape, and stacking the resulting additional channels on the input channels of the U-Net.

[Figure: Diffusion vocoder training.]
[Figure: Diffusion vocoder inference.]
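A diffusion upsampler conditions on the low-rate signal by interpolating it to the target length and stacking it as an extra input channel next to the diffusion state; a schematic sketch (numpy arrays stand in for the U-Net's channel dimension):

```python
import numpy as np

def upsampler_input(low_rate_audio, noisy_high_rate, factor):
    """Build the diffusion upsampler input: interpolate the low sample
    rate signal to the high sample rate length, then stack it as an
    additional channel next to the (noisy) high rate signal."""
    n_high = len(low_rate_audio) * factor
    x_low = np.arange(len(low_rate_audio)) * factor
    upsampled = np.interp(np.arange(n_high), x_low, low_rate_audio)
    return np.stack([noisy_high_rate, upsampled])  # (channels, time)

low = np.array([0.0, 1.0, 0.0, -1.0])        # low sample rate content
noise = np.random.randn(16)                  # high rate diffusion state
inp = upsampler_input(low, noise, factor=4)
print(inp.shape)  # (2, 16)
```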
In order to flatten the spectrogram, we have to match the configuration of the STFT used to obtain the spectrogram with the configuration of the 1d transposed convolution. The key insight is that the STFT operation can be viewed as a 1D convolution, with large kernels (of the window size) of sine and cosine waves, which is then merged in-place using the absolute value, and later mel-scaled. The mel-scaling doesn't alter the temporal positioning, only the frequency (or channel) content of the spectrogram. Hence, if we set kernel sizes equivalent to the STFT window length, strides equivalent to the STFT hop-length, and proper padding, the transposed convolution will focus on the same context region of the waveform used to obtain the spectrogram. Similarly, we set the input channels of the transposed convolution to match the number of channels used for the mel-spectrogram, and the output channels to 1. Stereo audio is decoded by batching.
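The shape bookkeeping follows directly from the STFT/transposed-convolution correspondence; with a window of 1024, a hop of 256, and 80 mel channels (the configuration given in the text), the spectrogram-to-waveform size ratio works out as follows (a pure-arithmetic sketch):

```python
# STFT <-> transposed 1D convolution correspondence, using the
# configuration from the text: window 1024, hop 256, 80 mel channels.
win_length = 1024   # transposed conv kernel size
hop_length = 256    # transposed conv stride
n_mels = 80         # transposed conv input channels; output channels = 1

num_samples = 2 ** 18                      # ~5.5 s at 48 kHz
num_frames = num_samples // hop_length     # frames in the spectrogram

# Spectrogram size vs. waveform size gives the compression factor.
spec_values = num_frames * n_mels
print(num_samples / spec_values)           # 3.2x compression (= 256 / 80)
```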
We used a window-length/kernel-size of 1024 and a hop-length/stride of 256; similarly to popular vocoders, we used 80 mel-spectrogram channels. With this configuration, the spectrogram has a default 3.2x compression factor over the initial waveform.

5.6.3 Evaluation

This model can produce high quality waveforms. As with other models, a good reconstruction of high frequencies requires more convolutional blocks towards the start of the U-Net. Moreover, we hypothesize that increasing the number of mel channels might increase quality for two reasons: first, the mel-spectrogram would compress less information out of the initial waveform, and second, the transposed convolution would have more channels to flatten the spectrogram and hence more capacity.

5.7 training info

5.7.1 Data

We trained all of our models on a 2500h mix of audio at 48kHz. In the text-based model, we used metadata such as title, genre, album and artist as conditioning information. For the autoencoder, upsampler, and vocoder, we trained on random crops of length 2^18 (~5.5s at 48kHz). For the long-context text-conditional audio generation model, we trained on fixed crops of length 2^21 (~44s at 48kHz), using the crop index as additional conditioning information.

5.7.2 Training

We trained all of our models with AdamW, using a learning rate of 10^-4, β1 = 0.95, β2 = 0.999, ε = 10^-6, and weight decay of 10^-3. For all models, we used an exponential moving average with β = 0.995 and power of 0.7. We trained all models for around 1M steps with a batch size of 32; this takes approximately 1 week on a single A100 GPU for the largest, text-conditional model.

6 FUTURE WORK

While our models can have a good generation quality on short few-second segments, or a good structure with longer segments, training an efficient model with both high quality and long context remains an open problem.
A few promising future modelling approaches that need more experimentation include: (1) train diffusion models using perceptual losses on the waveforms instead of L2; this might help to decrease the initial size of the U-Net, as we wouldn't have to process non-perceivable sounds; (2) stack multiple upsamplers to generate a song top-down, from low sample rates to high sample rates; (3) improve the quality of the diffusion autoencoder by using mel-spectrograms instead of magnitude spectrograms as input; (4) explore other types of conditioning which are not text-based, as they might be useful to navigate the audio latent space, which is often hard to describe in words; DreamBooth-like models [14] could be used to assign symbols to sounds; (5) compress mel-spectrograms to a quantized representation with diffusion autoencoders to allow for high compression ratios, and later train an autoregressive transformer on top of that.
Other simpler improvements on the current models include: (1) increase the training data from 2k hours to 60k-100k hours; (2) use more sophisticated diffusion samplers to get higher quality for the same number of sampling steps; (3) for text-based models, use larger pretrained language models to obtain embeddings, which has been shown to be very important for quality in [15].

7 CONCLUSION

Generating high-quality audio efficiently is a challenging task, as it involves the generation of numerous values to accurately represent the sound waves, especially when aiming for high-fidelity stereo sound at a sample rate of 48kHz. In this work, we proposed different methods and models to generate high quality audio from a textual description: from models targeting long-context audio with an emphasis on structure, or short-context audio with an emphasis on quality, to other useful models such as the diffusion upsampler and vocoder. We introduced a new method that utilizes text-conditional diffusion models based on 1D U-Nets, allowing for the generation of multiple minutes of 48kHz audio on a single consumer GPU.
Furthermore, we have provided a collection of open-source libraries to streamline future research, including potential improvements in audio autoencoders and diffusion models.

BIBLIOGRAPHY

[1] Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Olivier Teboul, David Grangier, Marco Tagliasacchi, and Neil Zeghidour. AudioLM: a Language Modeling Approach to Audio Generation. 2022. eprint: arXiv:2209.03143.

[2] Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever. Jukebox: A Generative Model for Music. 2020. eprint: arXiv:2005.00341.

[3] Jonathan Ho, Ajay Jain, and Pieter Abbeel. “Denoising diffusion probabilistic models.” In: Advances in Neural Information Processing Systems 33 (Dec. 2020), pp. 6840–6851.

[4] Jonathan Ho and Tim Salimans. Classifier-Free Diffusion Guidance. 2022. eprint: arXiv:2207.12598.

[5] Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. 2013. eprint: arXiv:1312.6114.

[6] Troy Luhman and Eric Luhman. Improving Diffusion Model Efficiency Through Patching. 2022. eprint: arXiv:2207.04316.

[7] Ozan Oktay et al. Attention U-Net: Learning Where to Look for the Pancreas. 2018. eprint: arXiv:1804.03999.

[8] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A Generative Model for Raw Audio. 2016. eprint: arXiv:1609.03499.

[9] Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural Discrete Representation Learning. 2017. eprint: arXiv:1711.00937.

[10] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. 2019. eprint: arXiv:1910.10683.

[11] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical Text-Conditional Image Generation with CLIP Latents. 2022. eprint: arXiv:2204.06125.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' [12] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' High-Resolution Image Synthesis with Latent Diffusion Models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' eprint: arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content='10752.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' [13] Olaf Ronneberger, Philipp Fischer, and Thomas Brox.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' U-Net: Convolutional Networks for Biomedical Image Segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' eprint: arXiv:1505.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content='04597.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' 41 42 bibliography [14] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' DreamBooth: Fine Tuning Text-to- Image Diffusion Models for Subject-Driven Generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' eprint: arXiv:2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content='12242.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' [15] Chitwan Saharia et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' Photorealistic Text-to-Image Diffusion Mod- els with Deep Language Understanding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' eprint: arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' 11487.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' [16] Tim Salimans and Jonathan Ho.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' Progressive Distillation for Fast Sampling of Diffusion Models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' eprint: arXiv:2202.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content='00512.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' [17] Jascha Sohl-Dickstein, Eric A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' Weiss, Niru Maheswaranathan, and Surya Ganguli.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' Deep Unsupervised Learning using Nonequi- librium Thermodynamics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' eprint: arXiv:1503.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content='03585.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' [18] Jiaming Song, Chenlin Meng, and Stefano Ermon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' Denoising Dif- fusion Implicit Models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' eprint: arXiv:2010.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content='02502.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' [19] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' Gomez, Lukasz Kaiser, and Illia Polo- sukhin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' Attention Is All You Need.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' eprint: arXiv : 1706 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdFQT4oBgHgl3EQfMTYr/content/2301.13267v1.pdf'} +page_content=' 03762.' 
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
arXiv:2301.03575v1 [cs.IT] 9 Jan 2023

On the Coexistence of eMBB and URLLC in Multi-cell Massive MIMO

Giovanni Interdonato, Member, IEEE, Stefano Buzzi, Senior Member, IEEE, Carmen D'Andrea, Member, IEEE, Luca Venturino, Senior Member, IEEE, Ciro D'Elia, and Paolo Vendittelli

Abstract—The non-orthogonal coexistence between the enhanced mobile broadband (eMBB) and the ultra-reliable low-latency communication (URLLC) services in the downlink of a multi-cell massive MIMO system is rigorously analyzed in this work. We provide a unified information-theoretic framework blending an infinite-blocklength analysis of the eMBB spectral efficiency (SE) in the ergodic regime with a finite-blocklength analysis of the URLLC error probability relying on the use of mismatched decoding and of the so-called saddlepoint approximation. Puncturing (PUNC) and superposition coding (SPC) are considered as alternative downlink coexistence strategies to deal with the inter-service interference, under the assumption of only statistical channel state information (CSI) knowledge at the users. eMBB and URLLC performances are then evaluated over different precoding techniques and power control schemes, accounting for imperfect CSI knowledge at the base stations, pilot-based estimation overhead, pilot contamination, spatially correlated channels, the structure of the radio frame, and the characteristics of the URLLC activation pattern. Simulation results reveal that SPC is, in many operating regimes, superior to PUNC in providing higher SE for the eMBB while achieving the target reliability for the URLLC with high probability. Moreover, PUNC might cause eMBB service outage in the presence of high URLLC traffic loads. However, PUNC turns out to be necessary to preserve the URLLC performance in scenarios where the multi-user interference cannot be satisfactorily alleviated.
Index Terms—Enhanced Mobile Broadband, Error Probability, Massive MIMO, Mismatched Decoding, Network Availability, Non-Orthogonal Multiple Access, Puncturing, Saddlepoint Approximation, Spectral Efficiency, Superposition Coding, Ultra-Reliable Low-Latency Communications.

This work was supported by the Ministero delle Imprese e del Made in Italy (former MISE) within the project "Smart Urban Mobility Management" (5G-SUMMA), Asse II, Supporto alle Tecnologie Emergenti. G. Interdonato, S. Buzzi, C. D'Andrea, L. Venturino and C. D'Elia are with the Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, Italy. They are also affiliated with Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT), 43124 Parma, Italy. P. Vendittelli is with TIM S.p.A., 20133 Milan, Italy. S. Buzzi is also affiliated with Politecnico di Milano, 20133 Milan, Italy. Corresponding author: Giovanni Interdonato.

I. INTRODUCTION

With the advent of the mobile application ecosystem and the resulting increase in the data-processing and storage capabilities of smart devices, several heterogeneous services have emerged, setting stringent communication requirements in terms of data rates, latency, reliability and massive connectivity. These requirements and related use cases have been summarized by the 3rd Generation Partnership Project (3GPP) into three macro services, namely enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC) and massive machine-type communications (mMTC) [1]. eMBB services require high peak data rates and stable connectivity, and include most everyday applications: entertainment, multimedia, communication, collaboration, mapping, web surfing, etc.
URLLC services require a one-way radio latency of 1 ms with 99.999% success probability, and include real-time and time-critical applications, such as autonomous driving, automation control, augmented reality, video and image processing, etc. mMTC services enable connectivity between a vast number of miscellaneous devices, and include applications such as smart grids, traffic management systems, environmental monitoring, etc.

5G started to roll out mostly as an eMBB service, essentially a faster version of LTE, whereas the mMTC and URLLC requirements continue to be refined and will materialize within the next decade, although some experimental activities are already taking place in many parts of the world1. Academic research and industrial standardization are currently interested in different coexistence mechanisms for such heterogeneous services, apparently moving away from the initial vision of a sliced network [2]. Slicing the network basically means allocating orthogonal resources (storage, computing, radio communications, etc.) to heterogeneous services so as to guarantee their mutual isolation. This approach is, in a broad sense, generally known as orthogonal multiple access (OMA). As an interesting alternative to orthogonal resource allocation, non-orthogonal multiple access (NOMA) is gaining increasing importance, especially with respect to the allocation of the radio access network (RAN) communication resources. The conventional approach to slicing the RAN is to separate the eMBB, mMTC, and URLLC services in the time and/or frequency domains, whereas NOMA relies on efficient coexistence strategies wherein heterogeneous services share the same time-frequency resources, being separated in the power and spatial domains. In this regard, the terminology heterogeneous OMA (H-OMA) is often adopted [2] to distinguish the orthogonal resource allocation of heterogeneous services from that of services of the same type, referred to as OMA.
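As a toy illustration of this distinction (the grid dimensions and the URLLC reservation below are illustrative assumptions, not values from the paper), consider how many time-frequency resources remain available to the eMBB when a URLLC slice is reserved orthogonally (H-OMA) versus when the two services share the whole grid non-orthogonally (H-NOMA):

```python
# Toy comparison (illustrative numbers, not from the paper) of the
# time-frequency resources left to the eMBB under orthogonal slicing
# (H-OMA) versus non-orthogonal sharing (H-NOMA).

def embb_resources(subcarriers, symbols, urllc_subcarriers, orthogonal):
    """Number of time-frequency resources usable by the eMBB service."""
    total = subcarriers * symbols
    if orthogonal:
        # H-OMA: a dedicated slice is reserved for URLLC, and the eMBB
        # cannot use it even when no URLLC packet actually arrives.
        return total - urllc_subcarriers * symbols
    # H-NOMA: both services may use every resource, being separated in
    # the power and/or spatial domain instead.
    return total

print(embb_resources(12, 14, 2, orthogonal=True))   # -> 140
print(embb_resources(12, 14, 2, orthogonal=False))  # -> 168
```

The sketch only counts resources; the price of sharing, namely the inter-service interference, is exactly what the coexistence strategies discussed in this paper must manage.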
(The same distinction applies to H-NOMA with respect to NOMA.)

Massive MIMO [3]–[5] is a technology that uses a very large number of co-located antennas at the base stations (BSs) to coherently and simultaneously serve multiple users over the same radio resources. The users are multiplexed in the spatial domain by using beamforming techniques that enable high-directivity transmission and reception. The use of many antennas also triggers favorable propagation, which further reduces the multi-user interference, and channel hardening, which reduces the random fluctuations of the effective channel gain. As a consequence, there is no need to adopt intricate signal processing techniques to deal with the multi-user interference. Such aggressive spatial multiplexing, along with the intrinsic practicality and scalability of the massive MIMO technology, leads to high levels of energy and spectral efficiency, spatial diversity, link reliability and connectivity.

The primary focus of the massive MIMO research has been on increasing the user data rates, thereby targeting the eMBB requirements. Lately, some studies have highlighted the significant benefits that massive MIMO can provide to URLLC [6]–[8] by reducing the outage and error probabilities, and therefore increasing the link reliability. Higher reliability results in fewer retransmissions, which, in turn, translates into lower latency. mMTC also benefits from the massive MIMO technology [7], [9] by capitalizing on its high energy efficiency to increase the devices' battery lifetime. Besides, favorable propagation enables an aggressive spatial multiplexing of the mMTC devices, facilitating the detection and random access procedures.

1 See, e.g., the funding programs from the Italian former Ministry of Economic Development, as well as those of other European Countries, the EU, USA, China and Japan.

A. RELATED WORKS

Coexistence between heterogeneous services has initially been studied in systems wherein a single-antenna BS serves multiple heterogeneous users. In [2], Popovski et al. proposed a first tractable communication-theoretic model that captures the key features of eMBB, URLLC and mMTC traffic. (These features are summarized in Table I.) Specifically, [2] analyzes two scenarios for a single-cell model: (i) slicing for URLLC and eMBB, and (ii) slicing for mMTC and eMBB. The downlink multiplexing of URLLC and eMBB is studied in [10] with the goal of maximizing the utility of the eMBB traffic while satisfying the quality-of-service requirements of the URLLC traffic, abstracting the operation at the physical layer. Coexistence mechanisms between URLLC and eMBB traffic, based on the puncturing technique, have been proposed in [11] for the uplink of a multi-cell network, wherein a simplified Wyner channel model with no fading was assumed. As for multi-user MIMO systems, in [12] a null-space-based spatial preemptive scheduler for joint URLLC and eMBB traffic is proposed for cross-objective optimization, where the critical URLLC quality-of-service (QoS) is guaranteed while maximizing the eMBB ergodic capacity. The spatial degrees of freedom at the BS are leveraged to fulfill the URLLC decoding requirements without jeopardizing the performance of the eMBB users. A similar study, but for a distributed setup, was conducted in [13], where a joint user association and resource allocation problem is formulated for the downlink of a fog network, considering the coexistence of URLLC and eMBB services for internet-of-things (IoT) applications. An analytic hierarchy process was proposed to set the priorities of the services and to formulate a two-sided matching game in which a stable association between the fog network infrastructure and IoT devices is established.
The coexistence between eMBB and URLLC is of most interest [14]–[18], and is mainly handled with three alternative techniques, listed here in descending order of complexity:

• Successive interference cancellation (SIC), with which the receiver iteratively decodes and removes the contributions of a specific service from the cumulative received signal. This approach requires the receiver to have access to the channel state information (CSI), so as to perform the multi-stage decoding, with decreasing levels of interference, at the required successful decoding probability.

• Puncturing (PUNC), which consists in preventing the inter-service interference altogether. In the downlink, whenever the transmitter has to send a URLLC signal, the eMBB signals are dropped over the channel uses involved in the URLLC transmission. In the uplink, the receiver uses an erasure decoder to discard the eMBB signals, provided that it is able to detect the presence of URLLC transmissions, e.g., via energy detection.

• Superposition coding (SPC), with which the transmitter simply sends a linear combination of the eMBB and URLLC signals. At the receiver, both in the uplink and in the downlink, the inter-service interference is treated as uncorrelated noise (TIN). Again, this approach requires the receiver to be able to detect the presence of the undesired transmissions.

In [14] the coexistence of URLLC and eMBB services in the uplink of a C-RAN architecture with shared analog fronthaul links is analyzed, accounting for SIC, puncturing, and TIN. This work provides an information-theoretic study of the performance of URLLC and eMBB traffic under both H-OMA and H-NOMA, considering standard cellular models with additive Gaussian noise links and finite inter-cell interference. The main conclusions are that H-NOMA achieves higher eMBB rates with respect to H-OMA, while guaranteeing reliable low-rate URLLC communication with minimal access latency.
Moreover, H-NOMA under SIC is seen to achieve the best performance, while, unlike the case with digital capacity-constrained fronthaul links, TIN always outperforms puncturing. A similar analysis is conducted in [11], including both the uplink and the downlink of a C-RAN without analog fronthaul, but considering practical aspects such as fading, the lack of CSI for URLLC transmitters, rate adaptation for eMBB transmitters and finite fronthaul capacity. Abreu et al. in [16] analyze both the H-OMA and H-NOMA options for eMBB traffic and grant-free URLLC in the uplink, accounting for minimum mean square error (MMSE) receivers with and without SIC, under the assumption of Rayleigh fading channels. The resulting outage probabilities and achievable rates show that TIN is mostly beneficial in the sufficiently high-SNR regime when SIC is employed or, in some cases, with low URLLC load. Otherwise, H-OMA supports higher loads for both services simultaneously. Recently, [17] proposed an approach to improve the supported loads for URLLC in the uplink, for both H-OMA and H-NOMA in the presence of eMBB
streaming, augmented reality, +entertainment +connected factories, traffic safety, au- +tonomous vehicles, telemedicine +internet of things, low-power sensors, +smart cities +traffic, showing the superiority of H-NOMA in ensuring the +reliability requirements of both the services. A similar analysis +but for the downlink is conducted in [18], [19] where optimal +resource allocation strategies and H-NOMA are combined to +satisfy the eMBB and URLLC QoS constraints, under the +assumption of perfect eMBB CSI and statistical URLLC CSI +knowledge. +The information-theoretic framework used by the afore- +mentioned works to characterize the performance achieved by +eMBB and URLLC users cannot be applied to massive MIMO +scenarios, for different reasons. Establishing the rate (or the +spectral efficiency) of the eMBB users in the ergodic (infinite- +blocklength) regime, upon the block-fading channel model, is +sound as the eMBB codewords spans an infinite number of +independent fading realizations. Nevertheless, as per the per- +formance of the URLLC users in a quasi-static fading scenario, +the use of the outage capacity, whose analysis includes infinite- +blocklength assumptions, leads to an inaccurate evaluation +of the error probability, as demonstrated in [8]. In addition, +outage capacity analyses do not capture the effects of the +CSI acquisition overhead when pilots are used to estimate the +uplink channel. As an alternative, finite-blocklength analyses +have been proposed for URLLC in conventional cellular +networks [18], [19], co-located massive MIMO networks [20], +[21] and cell-free massive MIMO networks [22], and rely on +the information-theoretic bounds and tools developed in [23], +e.g., the well known normal approximation. 
However, the work in [8] proved that the normal approximation is not accurate in the region of low error probabilities of interest in URLLC (< 10^-4), especially as the number of antennas at the BS increases, and in the presence of imperfect CSI. Importantly, Östman et al. in [8] provided a more rigorous finite-blocklength information-theoretic framework relying on the use of mismatched decoding [24] and of the saddlepoint approximation [25] for evaluating the error probability of the URLLC users in co-located massive MIMO systems. This framework, previously developed for wireless fading channels in [26]–[28], accounts for linear signal processing, imperfect CSI and the instantaneous channel estimation error, and additive uncorrelated noise including multi-user interference. However, the analysis of [8] is limited to the URLLC regime, and the coexistence with the eMBB is yet to be investigated under a unified information-theoretic framework.

B. CONTRIBUTIONS

Our contributions can be summarized as follows.

• We investigate the non-orthogonal multiplexing of the eMBB and the URLLC in the downlink of a multi-cell massive MIMO system, by providing a unified information-theoretic framework that combines an infinite-blocklength analysis to assess the SE of the eMBB and a finite-blocklength analysis to assess the error probability of the URLLC.

• Unlike prior works, wherein the URLLC performance is inappropriately evaluated through outage capacity analyses or the error probability obtained via the normal approximation, in this work the finite-blocklength information-theoretic analysis relies on the results and tools established in [8], where mismatched receivers and the saddlepoint approximation are assumed, but the coexistence between URLLC and eMBB was not investigated.

• The proposed unified framework accommodates two alternative coexistence strategies: PUNC and SPC.
The former prevents the inter-service interference to protect the URLLC reliability, whereas the latter accepts it to maintain the eMBB service. In addition, the analytical framework accounts for imperfect CSI acquisition at the BSs via uplink pilot transmissions, pilot contamination and pilot overhead, spatially correlated channels, and the lack of CSI at the users.

• We numerically evaluate the performance achieved by PUNC and SPC under different precoding schemes, namely maximum ratio, regularized zero-forcing and multi-cell MMSE, and different power allocation strategies, i.e., equal power allocation, weighted fractional power allocation and optimal power allocation maximizing the product SINR throughout the network. The coexistence between eMBB and URLLC is explored in various scenarios, including different configurations of the time-division duplex radio frame and different URLLC random activation patterns.

• The results of our comprehensive simulation campaign highlight the clear superiority of SPC over PUNC in most of the considered operating regimes. The main limitation of SPC, namely the multi-user interference it causes, is often overcome by using regularized zero-forcing and multi-cell MMSE, which in turn hinge on high-quality CSI acquisition. Whenever these precoding techniques cannot be implemented due to complexity or hardware constraints, the URLLC reliability requirements can be met by fine-tuning the parameters of the proposed weighted fractional power allocation. Conversely, performing PUNC is necessary to preserve the URLLC performance if the interference cancellation via precoding is ineffective, for instance, when pilot contamination is high or the multi-user interference is excessive.

• Pilot contamination among URLLC users is particularly destructive.
This led us to devise a pilot assignment policy that prioritizes the URLLC users. In our approach, we primarily assign unique orthogonal pilots to the URLLC users, admitting pilot reuse only among eMBB users. When possible, orthogonal pilots are assigned within cells to prevent intra-cell pilot contamination, and if the uplink training length is sufficiently large, then mutually orthogonal pilots are guaranteed to all users.

C. PAPER OUTLINE

The remainder of this paper is organized as follows. In Section II, we introduce the system model of the multi-cell massive MIMO system, including the description of the uplink training and a unified framework for the data transmission stage accounting for both the puncturing and the superposition coding techniques. In Section III, we present the information-theoretic analyses in the infinite-blocklength and finite-blocklength regimes for the eMBB and the URLLC performance evaluation, respectively. Section IV details the precoding techniques and power allocation strategies to deal with the coexistence of eMBB and URLLC users. Simulation results and discussions are provided in Section V, while the main findings of this work are discussed in Section VI.

D. NOTATION

Vectors and matrices are denoted by boldface lowercase and boldface uppercase letters, respectively. Calligraphic uppercase letters denote sets, while C and R represent the sets of complex and real numbers, respectively. E{·} indicates the expectation operator, while Pr{·} denotes the probability of a set. x+ represents the positive part function, namely x+ = max{x, 0}, and ⌊·⌋ denotes the floor function. The natural logarithm is indicated by log(·), and Q(·) denotes the Gaussian Q-function. CN(µ, Σ) describes a circularly symmetric complex Gaussian distribution with mean µ and covariance matrix Σ.
The superscripts $(\cdot)^{\mathrm{T}}$, $(\cdot)^*$ and $(\cdot)^{\mathrm{H}}$ denote the transpose, conjugate and conjugate transpose (Hermitian) operators, respectively. $\mathrm{tr}(A)$ indicates the trace of the matrix $A$, while $\|a\|$ denotes the $\ell_2$-norm of the vector $a$. The notation $[A]_{:,i}$ indicates the $i$th column of the matrix $A$. $I_N$ represents the identity matrix of size $N \times N$. Table II summarizes the notation used in the system model of this paper.

II. SYSTEM MODEL
Let us consider a multi-cell massive MIMO system with $L$ cells, each one served by a BS that is placed at the cell center and equipped with $M$ co-located antennas.

TABLE II
SYSTEM MODEL NOTATION
$L$: n. of cells; $K$: n. of users per cell
$M$: n. of BS antennas; $K_u$: n. of URLLC users per cell
$\alpha = K_u/K \in (0, 1)$; $K_e$: n. of eMBB users per cell
$\tau_c$: TDD frame length; $\mathcal{K}^u_j$: URLLC user set in cell $j$
$\tau_p$: UL training length; $\mathcal{K}^e_j$: eMBB user set in cell $j$
$\tau_d$: DL data transmission length; $T$: n. of slots in a TDD frame
$h^j_{lk} \in \mathbb{C}^M$: channel between BS $j$ and user $k$ in cell $l$
$\hat{h}^j_{lk}$: estimate of $h^j_{lk}$; $\tilde{h}^j_{lk}$: estimation error $h^j_{lk} - \hat{h}^j_{lk}$
$R^j_{lk}$: correlation matrix of $h^j_{lk}$; $\beta^j_{lk}$: average channel gain of $h^j_{lk}$
$C^j_{lk}$: correlation matrix of $\tilde{h}^j_{lk}$; $f$: pilot reuse factor
$p^{\mathrm{p}}_{jk}$: UL pilot power; $\rho^{\max}_j$: max transmit power at BS $j$
$\mathcal{P}_{jk}$: set of all the users using the same pilot as user $k$ in cell $j$
$A^t_{jk}$: 1 if URLLC user $k$ in cell $j$ is active in slot $t$, 0 otherwise
$a_u$: parameter of the Bernoulli distribution that draws $A^t_{jk}$
$\varsigma^{\mathrm{e}}_{jk}[n]$: data transmitted by BS $j$ to eMBB user $k$ in channel use $n$
$\varsigma^{\mathrm{u}}_{ji}[n]$: data transmitted by BS $j$ to URLLC user $i$ in channel use $n$
$w_{jk} \in \mathbb{C}^M$: precoding vector used by BS $j$ for its user $k$
$\sigma^2_{\mathrm{u}}$: UL noise variance; $\sigma^2_{\mathrm{d}}$: DL noise variance
$\rho^{\mathrm{u}}_{ji}$: DL power to URLLC user $i$; $\rho^{\mathrm{e}}_{jk}$: DL power to eMBB user $k$
$g^{li}_{jk}$: precoded DL channel from BS $l$, using $w_{li}$, to user $k$ in cell $j$; $\hat{g}^{li}_{jk}$: estimate of $g^{li}_{jk}$
$n_d$: URLLC codeword length; $\epsilon^{\mathrm{dl}}_{jk}$: DL error probability; $\eta^{\mathrm{dl}}$: DL network availability
$\nu$: exponent characterizing the fractional power allocation (FPA)
$\omega$: FPA weight tuning the power allocated to the URLLC users

Each cell covers a square area of $D \times D$ km², and its BS provides service to $K$ users. It holds that $M \gg K$, so that interference suppression can be efficiently carried out by exploiting the spatial degrees of freedom. A fraction $0 \le \alpha \le 1$ of the $K$ users requests a URLLC service, e.g., a vehicle in cellular vehicle-to-everything (C-V2X) use cases for intelligent transportation systems, or a machine in factory automation use cases for "Industry 4.0". Letting $K_u = \alpha K$ be the number of URLLC users per cell, $K_e = K - K_u$ is the number of eMBB users per cell. The sets of indices of the eMBB and URLLC users in cell $j$ are denoted by $\mathcal{K}^e_j$ and $\mathcal{K}^u_j$, respectively.

A. TDD PROTOCOL AND FRAME STRUCTURE
The considered system operates in time-division duplex (TDD) mode to facilitate CSI acquisition and limit the estimation overhead. In addition, we assume that the channel is reciprocal as a result of a perfect calibration of the RF chains.
By leveraging channel reciprocity, the channel estimates acquired by the BS in the uplink are then utilized in the downlink to design the transmit precoding vectors. As channel hardening holds for co-located massive MIMO systems with sufficiently large antenna arrays in most propagation environments, we assume that the users do not estimate the downlink channels and reliably decode downlink data relying solely on the knowledge of the statistical CSI. Hence, the TDD protocol consists of three phases: (i) pilot-based uplink training, (ii) uplink data transmission, and (iii) downlink data transmission.
The time-frequency resources are structured in TDD frames, each one grouping a set of subcarriers and time samples over which the channel response is assumed to be frequency-flat and time-invariant. The TDD frame must accommodate the aforementioned protocol phases and support all the users; thus its size is designed to match that of the smallest user's coherence block in the network.

Fig. 1. An illustration of the TDD frame assuming no uplink data transmission phase, and representing the resource allocation in case of puncturing (PUNC) and superposition coding (SPC) operation.

As shown in Fig. 1, the TDD frame consists of $\tau_c = T_c B_c$ samples (or channel uses), where $T_c$ is the coherence time and $B_c$ is the coherence bandwidth. $\tau_p$ channel uses out of $\tau_c$ are spent for the uplink CSI acquisition, whereas the remaining channel uses are devoted to the uplink and downlink data transmission. Since, in this paper, we only focus on the downlink operation, we assume without loss of generality that $\tau_d = \tau_c - \tau_p$ is the length of the downlink data transmission phase.
The latter is divided into $T$ slots of equal length. As conventionally assumed in the ergodic regime, an eMBB transmission spans multiple (theoretically, an infinite number of) TDD frames, wherein the channel realizations evolve independently according to the block-fading model. To evaluate the spectral efficiency achieved by the eMBB users, we look at a single TDD frame and resort to the information-theoretic bounds and tools of the infinite-blocklength regime [4], [5]. In contrast, URLLC transmissions are confined in time to meet the very strict latency requirements and are allowed to span only one slot. Hence, the number of channel uses in a slot equals the URLLC codeword length. We assume a random activation pattern for the URLLC users; within a TDD frame, a URLLC user may be active in multiple slots. To characterize the error probability of the URLLC transmissions, we look separately at each single slot of a TDD frame and resort to the finite-blocklength information-theoretic bounds and tools presented in [8].

B. CHANNEL MODEL AND UPLINK TRAINING
The channel response between the $k$-th user in cell $l$ and the BS in cell $j$ is denoted by the $M$-dimensional complex-valued vector $h^j_{lk}$. We assume correlated Rayleigh fading channels, that is $h^j_{lk} \sim \mathcal{CN}(0_M, R^j_{lk})$, where $R^j_{lk} \in \mathbb{C}^{M \times M}$ is the positive semi-definite spatial correlation matrix. The corresponding average channel gain (or large-scale fading coefficient) is given by $\beta^j_{lk} = \mathrm{tr}(R^j_{lk})/M$. Large-scale fading quantities are assumed to be known at the BS.
In the uplink training phase, each user transmits a pilot sequence that spans $\tau_p$ channel uses. The pilot sequence of user $k$ in cell $j$ is denoted by $\varphi_{jk} \in \mathbb{C}^{\tau_p}$. All the pilot sequences are drawn from a set of $\tau_p$ mutually orthogonal pilots; hence the inner product between two pilots equals either $\tau_p$, if the sequences are identical, or 0, if they are mutually orthogonal.
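This inner-product property is easy to verify numerically. As a hedged illustration (the paper does not prescribe a specific pilot book; a DFT-based construction is assumed here), the columns of a $\tau_p \times \tau_p$ DFT matrix form $\tau_p$ mutually orthogonal pilots with unit-modulus entries:

```python
import numpy as np

tau_p = 8                                  # uplink training length

# Assumed construction for illustration: columns of a tau_p x tau_p DFT matrix
# are tau_p mutually orthogonal pilot sequences with unit-modulus entries.
F = np.fft.fft(np.eye(tau_p))
same = np.vdot(F[:, 0], F[:, 0])           # inner product of identical pilots
diff = np.vdot(F[:, 0], F[:, 3])           # inner product of different pilots
print(round(same.real), round(abs(diff), 6))   # -> 8 0.0
```

Any unitary pilot book scaled to unit-modulus entries would serve equally well; the DFT choice is only one convenient option.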
Notice that re-using the pilots throughout the network might be unavoidable, as the share of the TDD frame reserved for training is limited and, importantly, as the CSI acquisition overhead significantly degrades the spectral efficiency. Pilot reuse gives rise to additional interference, known as pilot contamination [3], which degrades the quality of the acquired CSI and correlates the channel estimates. The cumulative uplink signal received at BS $j$, denoted by $Y^{\mathrm{p}}_j \in \mathbb{C}^{M \times \tau_p}$, reads

$$Y^{\mathrm{p}}_j = \sum_{k=1}^{K} \sqrt{p^{\mathrm{p}}_{jk}}\, h^j_{jk} \varphi^{\mathrm{T}}_{jk} + \sum_{\substack{l=1 \\ l \ne j}}^{L} \sum_{i=1}^{K} \sqrt{p^{\mathrm{p}}_{li}}\, h^j_{li} \varphi^{\mathrm{T}}_{li} + N^{\mathrm{p}}_j, \quad (1)$$

where $p^{\mathrm{p}}_{jk}$ is the transmit pilot power, and $N^{\mathrm{p}}_j$ is the additive receiver noise with i.i.d. elements distributed as $\mathcal{CN}(0, \sigma^2_{\mathrm{u}})$, with $\sigma^2_{\mathrm{u}}$ being the receiver noise variance in the uplink. To estimate the channel of user $k$ in its own cell, $h^j_{jk}$, BS $j$ correlates $Y^{\mathrm{p}}_j$ with the known pilot sequence $\varphi_{jk}$ as

$$y^{\mathrm{p}}_{jjk} = Y^{\mathrm{p}}_j \varphi^*_{jk} = \sqrt{p^{\mathrm{p}}_{jk}}\, \tau_p h^j_{jk} + \sum_{\substack{i=1 \\ i \ne k}}^{K} \sqrt{p^{\mathrm{p}}_{ji}}\, h^j_{ji} \varphi^{\mathrm{T}}_{ji} \varphi^*_{jk} + \sum_{\substack{l=1 \\ l \ne j}}^{L} \sum_{i=1}^{K} \sqrt{p^{\mathrm{p}}_{li}}\, h^j_{li} \varphi^{\mathrm{T}}_{li} \varphi^*_{jk} + N^{\mathrm{p}}_j \varphi^*_{jk}. \quad (2)$$

In (2), the second term on the right-hand side represents the intra-cell pilot contamination, while the third term quantifies the inter-cell pilot contamination. A conventional pilot allocation strategy consists in assigning mutually orthogonal pilots to users within the same cell, and re-using the pilot sequences over different cells [5]. This is a reasonable choice, as intra-cell pilot contamination is presumably stronger than inter-cell pilot contamination. We let $\tau_p = fK$, where $f$ is referred to as the pilot reuse factor. Importantly, in order not to jeopardize the ultra-reliability of the URLLC transmissions, we assume that unique orthogonal pilot sequences are assigned to all the URLLC users in the network, whenever feasible (namely, when $\tau_p > L K_e$).
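At the index level, this prioritized assignment can be sketched as follows; the helper `assign_pilots` and its feasibility thresholds are illustrative assumptions of this sketch, not taken from the paper:

```python
# Index-level sketch of the prioritized pilot assignment: unique pilots for all
# URLLC users first, pilot reuse only among eMBB users. The feasibility
# thresholds used below are assumptions of this sketch.
def assign_pilots(L, K_u, K_e, tau_p):
    """Return pilot indices per cell; URLLC users listed first in each cell."""
    assert tau_p >= L * K_u + K_e, "sketch covers only the URLLC-unique regime"
    pilots, nxt = [], 0
    for j in range(L):
        cell = []
        for _ in range(K_u):                  # unique network-wide URLLC pilots
            cell.append(nxt)
            nxt += 1
        for k in range(K_e):                  # eMBB: orthogonal within the cell,
            if tau_p >= L * (K_u + K_e):      # unique everywhere if tau_p >= L*K,
                cell.append(L * K_u + j * K_e + k)
            else:                             # otherwise reused across cells
                cell.append(L * K_u + k)
        pilots.append(cell)
    return pilots

full = assign_pilots(L=2, K_u=2, K_e=3, tau_p=10)   # tau_p >= L*K
reuse = assign_pilots(L=2, K_u=2, K_e=3, tau_p=7)   # eMBB pilots reused
flat = [p for cell in full for p in cell]
print(len(set(flat)) == len(flat))                  # -> True (everyone unique)
print(reuse[0][2:] == reuse[1][2:])                 # -> True (eMBB block reused)
```

In both regimes the URLLC users keep contamination-free pilots, which is the point of the policy.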
Summarizing, the pilot allocation strategy we propose primarily aims to prevent the URLLC users from being affected by pilot contamination, and secondarily to prevent intra-cell pilot contamination. Finally, if $\tau_p$ is sufficiently large, that is $\tau_p \ge LK$, then mutually orthogonal pilots can be guaranteed to everyone. Let us define the set

$$\mathcal{P}_{jk} = \big\{(l, i) : \varphi_{li} = \varphi_{jk},\; l = 1, \ldots, L,\; i = 1, \ldots, K\big\}, \quad (3)$$

including the indices of all the users (and of the corresponding cells) that use the same pilot as user $k$ in cell $j$. Hence, we can rewrite (2) as

$$y^{\mathrm{p}}_{jjk} = \sqrt{p^{\mathrm{p}}_{jk}}\, \tau_p h^j_{jk} + \tau_p \sum_{(l,i) \in \mathcal{P}_{jk} \setminus (j,k)} \sqrt{p^{\mathrm{p}}_{li}}\, h^j_{li} + N^{\mathrm{p}}_j \varphi^*_{jk}. \quad (4)$$

The processed uplink signal $y^{\mathrm{p}}_{jjk}$ is a sufficient statistic for the estimation of $h^j_{jk}$. Given knowledge of the spatial correlation matrices, BS $j$ can compute the minimum mean-squared error (MMSE) estimate of $h^j_{jk}$, denoted by $\hat{h}^j_{jk}$, based on the observation $y^{\mathrm{p}}_{jjk}$ as [5]

$$\hat{h}^j_{jk} = \sqrt{p^{\mathrm{p}}_{jk}}\, R^j_{jk} \Psi^j_{jk} y^{\mathrm{p}}_{jjk}, \quad (5)$$

where

$$\Psi^j_{jk} = \Big( \sum_{(l,i) \in \mathcal{P}_{jk}} p^{\mathrm{p}}_{li} \tau_p R^j_{li} + \sigma^2_{\mathrm{u}} I_M \Big)^{-1}. \quad (6)$$

The estimation error is given by $\tilde{h}^j_{jk} = h^j_{jk} - \hat{h}^j_{jk}$, and has correlation matrix

$$C^j_{jk} = \mathbb{E}\big\{ \tilde{h}^j_{jk} (\tilde{h}^j_{jk})^{\mathrm{H}} \big\} = R^j_{jk} - p^{\mathrm{p}}_{jk} \tau_p R^j_{jk} \Psi^j_{jk} R^j_{jk}.$$

It follows that $\hat{h}^j_{jk}$ and $\tilde{h}^j_{jk}$ are independent random variables distributed as

$$\tilde{h}^j_{jk} \sim \mathcal{CN}\big(0_M, C^j_{jk}\big), \qquad \hat{h}^j_{jk} \sim \mathcal{CN}\big(0_M, R^j_{jk} - C^j_{jk}\big).$$

C. DOWNLINK TRANSMISSION
In the downlink transmission phase, each BS transmits payload data to all the active users of its cell. Let $A^t_{jk}$ be a coefficient that equals 1 if a URLLC transmission takes place in the $t$-th slot for URLLC user $k$ in cell $j$, and 0 otherwise. This coefficient models the random activation pattern of the URLLC users, which follows a Bernoulli distribution with parameter $a_u$, i.e., $A^t_{jk} \sim \mathrm{Bern}(a_u)$.
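Equations (5)-(6) and the error correlation matrix can be exercised with randomly generated correlation matrices. The sketch below (an illustration, not the paper's simulator) checks that the resulting error covariance $C$ is positive semi-definite and strictly reduces the prior uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)
M, tau_p, p = 4, 8, 1.0          # antennas, training length, shared pilot power

def rand_corr(M, rng):
    """Random positive semi-definite spatial correlation matrix, tr(R)/M = 1."""
    A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
    R = A @ A.conj().T
    return R / np.trace(R).real * M

# Correlation matrices of the desired user and of one pilot-sharing user.
R_des, R_int = rand_corr(M, rng), rand_corr(M, rng)
sigma2_u = 0.1

# Eq. (6): Psi = (sum over pilot-sharing users of p*tau_p*R + sigma_u^2 I)^(-1).
Psi = np.linalg.inv(p * tau_p * (R_des + R_int) + sigma2_u * np.eye(M))

# Error correlation stated below eq. (6): C = R - p*tau_p * R Psi R.
C = R_des - p * tau_p * R_des @ Psi @ R_des

# Sanity checks: C is PSD and the residual error power is below the prior power.
eigs = np.linalg.eigvalsh((C + C.conj().T) / 2)
print(eigs.min() > -1e-8, np.trace(C).real < np.trace(R_des).real)  # -> True True
```

The gap between $\mathrm{tr}(C)$ and $\mathrm{tr}(R)$ widens as the contaminating term $R_{\mathrm{int}}$ shrinks, which mirrors why protecting URLLC users from pilot contamination matters.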
To handle the coexistence of eMBB and URLLC users in the downlink, we consider two transmission techniques: (i) puncturing, and (ii) superposition coding. Under puncturing, whenever a URLLC transmission is triggered by a BS in a certain slot, all the eMBB transmissions therein are dropped. However, the eMBB service can still be guaranteed in the remaining slots of the frame where no URLLC users are active. Under superposition coding, eMBB transmissions occur in all the slots, and each BS linearly combines eMBB and URLLC signals whenever URLLC transmissions are triggered.
The analytical framework detailed next is general, namely it holds for both the aforementioned transmission techniques upon setting, for an arbitrary BS $j$ and slot $t$, the coefficient

$$\tilde{A}^t_j = \begin{cases} \Big(1 - \sum_{i \in \mathcal{K}^u_j} A^t_{ji}\Big)^+, & \text{for puncturing,} \\ 1, & \text{for superposition coding.} \end{cases}$$

Let $\varsigma^{\mathrm{e}}_{jk}[n]$ or $\varsigma^{\mathrm{u}}_{jk}[n]$ be the data symbol transmitted by BS $j$ to user $k$ over an arbitrary channel use $n$, if $k$ is an eMBB user or a URLLC user, respectively. We assume that $\varsigma^{s}_{jk}[n] \sim \mathcal{CN}(0, 1)$, with $s \in \{\mathrm{e}, \mathrm{u}\}$. A slot consists of $n_d$ channel uses, with $n_d = \lfloor \tau_d / T \rfloor$, which equals the length of the URLLC codeword. The data symbol is precoded by using the $M$-dimensional precoding vector $w_{jk}$, which is a function of the CSI acquired at the BS during the uplink training and satisfies $\mathbb{E}\{\|w_{jk}\|^2\} = 1$. The data signal transmitted by BS $j$ over an arbitrary channel use $n$ of slot $t$ is given by

$$x^t_j[n] = \tilde{A}^t_j \sum_{k \in \mathcal{K}^e_j} \sqrt{\rho^{\mathrm{e}}_{jk}}\, w_{jk} \varsigma^{\mathrm{e}}_{jk}[n] + \sum_{i \in \mathcal{K}^u_j} A^t_{ji} \sqrt{\rho^{\mathrm{u}}_{ji}}\, w_{ji} \varsigma^{\mathrm{u}}_{ji}[n], \quad (7)$$

with $n = 1, \ldots, n_d$, and where $\rho^{\mathrm{e}}_{jk}$ and $\rho^{\mathrm{u}}_{ji}$ are the downlink transmit powers used by BS $j$ for its eMBB user $k$ and URLLC user $i$, respectively, satisfying the per-BS power constraint

$$\mathbb{E}\big\{ |x^t_j[n]|^2 \big\} = \tilde{A}^t_j \sum_{k \in \mathcal{K}^e_j} \rho^{\mathrm{e}}_{jk} + \sum_{i \in \mathcal{K}^u_j} A^t_{ji} \rho^{\mathrm{u}}_{ji} \le \rho^{\max}_j, \quad (8)$$

with $j = 1, \ldots, L$, and where $\rho^{\max}_j$ is the maximum transmit power at BS $j$.
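The unified coefficient $\tilde{A}^t_j$ and the power constraint (8) translate into a few lines of code; the power values below are arbitrary illustrative numbers:

```python
# Sketch of the unified coefficient A~_j^t and the power constraint (8);
# all power values are arbitrary illustrative numbers.
def A_tilde(A_slot, mode):
    """A_slot: URLLC activity bits A_{ji}^t of cell j in slot t."""
    if mode == "SPC":
        return 1                              # eMBB always transmitted
    return max(1 - sum(A_slot), 0)            # PUNC: mute eMBB if any URLLC active

def tx_power(A_slot, rho_e, rho_u, mode):
    """E{|x_j^t[n]|^2} as in (8)."""
    At = A_tilde(A_slot, mode)
    return At * sum(rho_e) + sum(a * r for a, r in zip(A_slot, rho_u))

rho_e, rho_u, rho_max = [0.25, 0.25], [0.25, 0.125], 1.0
A_slot = [1, 0]                               # one of two URLLC users active

for mode in ("PUNC", "SPC"):
    P = tx_power(A_slot, rho_e, rho_u, mode)
    print(mode, P, P <= rho_max)              # PUNC -> 0.25, SPC -> 0.75
```

The example makes the trade-off concrete: PUNC zeroes the eMBB share of the budget in active slots, while SPC stacks both services within the same constraint.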
The data signal received at user $k$ in cell $j$ over an arbitrary channel use $n$ of slot $t$ is denoted by $y^{t,s}_{jk}[n]$, with $s \in \{\mathrm{e}, \mathrm{u}\}$. In line with the conventional massive MIMO operation, we assume that the users do not acquire the instantaneous downlink CSI, but rather rely on a mean-value approximation of their downlink precoded channels. Such an approximation is accurate if channel hardening occurs. If user $k$ in cell $j$ is an eMBB user, namely $k \in \mathcal{K}^e_j$, then its received data signal over an arbitrary channel use $n$ of slot $t$ can be written as in (9), where $w_{jk}[n] \sim \mathcal{CN}(0, \sigma^2_{\mathrm{d}})$ is the i.i.d. receiver noise with variance $\sigma^2_{\mathrm{d}}$, and we have defined $g^{li}_{jk} = (h^l_{jk})^{\mathrm{H}} w_{li}$, namely the precoded downlink (scalar) channel between the BS in cell $l$, using the precoding vector intended for its user $i$, and the $k$-th user in cell $j$. If user $k$ in cell $j$ is a URLLC user, its received data signal over an arbitrary channel use $n$ of slot $t$ can be written as in (10). Equation (9) emphasizes the fact that user $k$ in cell $j$ solely knows the statistical CSI of the downlink channel, that is $\mathbb{E}\{g^{jk}_{jk}\}$. The second term in (9) represents the self-interference due to this lack of instantaneous CSI, referred to as beamforming gain uncertainty. Next, the intra-cell inter-service and intra-cell intra-service interference terms represent the interference caused by the URLLC and eMBB users of cell $j$, respectively. This is presumably stronger than the inter-cell interference caused by the eMBB users (i.e., intra-service) and the URLLC users (i.e., inter-service) in the other cells. A similar distinction of the various signal contributions is reported in (10) for URLLC user $k$ in cell $j$. In this case, the impact of the lack of instantaneous CSI at the user will be highlighted in the next section.

III.
PERFORMANCE ANALYSIS
In this section, we evaluate the downlink performance of the eMBB and URLLC users. For the eMBB users, we consider the spectral efficiency (SE) by applying the infinite-blocklength information-theoretic results established in the ergodic regime [4], [5], [29]. An achievable downlink SE, namely a lower bound on the ergodic downlink capacity, can be obtained by applying the popular hardening bound technique [4], [5] to the signal model in (9), treating all the interference sources as uncorrelated noise.

$$y^{t,\mathrm{e}}_{jk}[n] = \underbrace{\mathbb{E}\{g^{jk}_{jk}\}\, \tilde{A}^t_j \sqrt{\rho^{\mathrm{e}}_{jk}}\, \varsigma^{\mathrm{e}}_{jk}[n]}_{\text{desired signal}} + \underbrace{\big(g^{jk}_{jk} - \mathbb{E}\{g^{jk}_{jk}\}\big) \tilde{A}^t_j \sqrt{\rho^{\mathrm{e}}_{jk}}\, \varsigma^{\mathrm{e}}_{jk}[n]}_{\text{self-interference}} + \underbrace{\sum_{i \in \mathcal{K}^u_j} g^{ji}_{jk} A^t_{ji} \sqrt{\rho^{\mathrm{u}}_{ji}}\, \varsigma^{\mathrm{u}}_{ji}[n]}_{\text{intra-cell inter-service interference}} + \underbrace{\sum_{i \in \mathcal{K}^e_j \setminus \{k\}} g^{ji}_{jk} \tilde{A}^t_j \sqrt{\rho^{\mathrm{e}}_{ji}}\, \varsigma^{\mathrm{e}}_{ji}[n]}_{\text{intra-cell intra-service interference}} + \underbrace{\sum_{\substack{l=1 \\ l \ne j}}^{L} \sum_{i \in \mathcal{K}^e_l} g^{li}_{jk} \tilde{A}^t_l \sqrt{\rho^{\mathrm{e}}_{li}}\, \varsigma^{\mathrm{e}}_{li}[n]}_{\text{inter-cell intra-service interference}} + \underbrace{\sum_{\substack{l=1 \\ l \ne j}}^{L} \sum_{i \in \mathcal{K}^u_l} g^{li}_{jk} A^t_{li} \sqrt{\rho^{\mathrm{u}}_{li}}\, \varsigma^{\mathrm{u}}_{li}[n]}_{\text{inter-cell inter-service interference}} + \underbrace{w_{jk}[n]}_{\text{noise}} \quad (9)$$

$$y^{t,\mathrm{u}}_{jk}[n] = \underbrace{g^{jk}_{jk} A^t_{jk} \sqrt{\rho^{\mathrm{u}}_{jk}}\, \varsigma^{\mathrm{u}}_{jk}[n]}_{\text{desired signal}} + \underbrace{\sum_{i \in \mathcal{K}^u_j \setminus \{k\}} g^{ji}_{jk} A^t_{ji} \sqrt{\rho^{\mathrm{u}}_{ji}}\, \varsigma^{\mathrm{u}}_{ji}[n]}_{\text{intra-cell intra-service interference}} + \underbrace{\sum_{\substack{l=1 \\ l \ne j}}^{L} \sum_{i \in \mathcal{K}^u_l} g^{li}_{jk} A^t_{li} \sqrt{\rho^{\mathrm{u}}_{li}}\, \varsigma^{\mathrm{u}}_{li}[n]}_{\text{inter-cell intra-service interference}} + \underbrace{\sum_{l=1}^{L} \sum_{i \in \mathcal{K}^e_l} g^{li}_{jk} \tilde{A}^t_l \sqrt{\rho^{\mathrm{e}}_{li}}\, \varsigma^{\mathrm{e}}_{li}[n]}_{\text{inter-service interference}} + \underbrace{w_{jk}[n]}_{\text{noise}} \quad (10)$$

Specifically, an achievable downlink spectral efficiency of an arbitrary eMBB user $k$ in cell $j$ is given by

$$\mathrm{SE}^{\mathrm{e}}_{jk} = \frac{\tau_d}{\tau_c} \frac{1}{T} \sum_{t=1}^{T} \log_2\big(1 + \mathrm{SINR}^{t,\mathrm{e}}_{jk}\big) \quad \text{[bit/s/Hz]}, \quad (11)$$

where $\tau_d/\tau_c$ accounts for the estimation overhead, and

$$\mathrm{SINR}^{t,\mathrm{e}}_{jk} = \frac{\tilde{A}^t_j \rho^{\mathrm{e}}_{jk} \big| \mathbb{E}\{g^{jk}_{jk}\} \big|^2}{\displaystyle\sum_{l=1}^{L} \sum_{i=1}^{K} \varrho^t_{li}\, \mathbb{E}\{|g^{li}_{jk}|^2\} - \tilde{A}^t_j \rho^{\mathrm{e}}_{jk} \big| \mathbb{E}\{g^{jk}_{jk}\} \big|^2 + \sigma^2_{\mathrm{d}}} \quad (12)$$

is the effective SINR of user $k \in \mathcal{K}^e_j$, where the expectations are taken with respect to the random channel realizations, and

$$\varrho^t_{li} = \begin{cases} A^t_{li} \rho^{\mathrm{u}}_{li}, & \text{if } i \in \mathcal{K}^u_l, \\ \tilde{A}^t_l \rho^{\mathrm{e}}_{li}, & \text{if } i \in \mathcal{K}^e_l. \end{cases} \quad (13)$$

The expression of the achievable SE in (11) holds for any choice of precoding scheme, channel estimator and channel distribution. Importantly, it accounts for either choice of coexistence technique between heterogeneous services, namely puncturing or superposition coding. The infinite-blocklength analysis above rests on the block-fading channel model, entailing that each eMBB codeword has infinite length and spans a large number of independent fading realizations. This assumption cannot be applied to the URLLC case. For the URLLC users, we consider a nonasymptotic analysis of the downlink error probability on a per-slot basis, by applying the finite-blocklength information-theoretic results established in [8]. Firstly, we rewrite (10) as

$$y^{t,\mathrm{u}}_{jk}[n] = g^{jk}_{jk} q_{jk}[n] + z_{jk}[n], \quad n = 1, \ldots, n_d, \quad (14)$$

where $q_{jk}[n] = A^t_{jk} \sqrt{\rho^{\mathrm{u}}_{jk}}\, \varsigma^{\mathrm{u}}_{jk}[n]$, and

$$z_{jk}[n] = \sum_{i \in \mathcal{K}^u_j \setminus \{k\}} g^{ji}_{jk} q_{ji}[n] + \sum_{i \in \mathcal{K}^e_j} g^{ji}_{jk} \tilde{A}^t_j \sqrt{\rho^{\mathrm{e}}_{ji}}\, \varsigma^{\mathrm{e}}_{ji}[n] + \sum_{\substack{l=1 \\ l \ne j}}^{L} \bigg( \sum_{i \in \mathcal{K}^u_l} g^{li}_{jk} q_{li}[n] + \sum_{i \in \mathcal{K}^e_l} g^{li}_{jk} \tilde{A}^t_l \sqrt{\rho^{\mathrm{e}}_{li}}\, \varsigma^{\mathrm{e}}_{li}[n] \bigg) + w_{jk}[n]. \quad (15)$$

However, URLLC user $k$ in cell $j$ does not have access to $g^{jk}_{jk}$, but performs data decoding by leveraging only its mean value, $\hat{g}^{jk}_{jk} = \mathbb{E}\{(h^j_{jk})^{\mathrm{H}} w_{jk}\}$, which is treated as perfect. This estimate is accurate if channel hardening holds. Notice that the precoded channel $g^{jk}_{jk}$ is frequency-flat and time-invariant over the transmission of the $n_d$-length URLLC codeword in slot $t$. Moreover, $g^{jk}_{jk}$ remains constant for any other transmission from BS $j$ to user $k$ over slots of the same TDD frame.
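Given the channel statistics $\mathbb{E}\{g\}$ and $\mathbb{E}\{|g|^2\}$ (e.g., estimated via Monte Carlo), evaluating (11)-(13) is a direct computation. A schematic sketch with made-up statistics, for a PUNC pattern that mutes the eMBB user in two of the $T$ slots:

```python
import numpy as np

# Schematic evaluation of (11)-(13) for one eMBB user; all statistics made up.
T, tau_d, tau_c = 5, 500, 580
rho_e, sigma2_d = 0.5, 0.1
mean_g = 0.9                 # stands for E{g_{jk}^{jk}}  (assumed real here)
interf = 0.55                # stands for sum_{l,i} rho^t_{li} E{|g^{li}_{jk}|^2}

def sinr(A_tilde):
    """Eq. (12); the denominator is kept fixed across slots for simplicity."""
    sig = A_tilde * rho_e * abs(mean_g) ** 2
    return sig / (interf - sig + sigma2_d)

A_tilde = [1, 0, 1, 0, 1]    # PUNC: eMBB muted in slots 2 and 4
SE = (tau_d / tau_c) * np.mean([np.log2(1 + sinr(a)) for a in A_tilde])
print(round(SE, 3), "bit/s/Hz")
```

The muted slots contribute zero rate, which is exactly the eMBB cost of puncturing that the average over $T$ in (11) captures.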
Given all channels and precoding vectors, the effective noise terms $\{z_{jk}[n] \in \mathbb{C};\; n = 1, \ldots, n_d\}$ are conditionally i.i.d. random variables distributed as $\mathcal{CN}(0, \sigma^2_{jk})$, with variance

$$\sigma^2_{jk} = \sum_{i \in \mathcal{K}^u_j \setminus \{k\}} A^t_{ji} \rho^{\mathrm{u}}_{ji} |g^{ji}_{jk}|^2 + \sum_{i \in \mathcal{K}^e_j} \tilde{A}^t_j \rho^{\mathrm{e}}_{ji} |g^{ji}_{jk}|^2 + \sum_{\substack{l=1 \\ l \ne j}}^{L} \bigg( \sum_{i \in \mathcal{K}^u_l} A^t_{li} \rho^{\mathrm{u}}_{li} |g^{li}_{jk}|^2 + \sum_{i \in \mathcal{K}^e_l} \tilde{A}^t_l \rho^{\mathrm{e}}_{li} |g^{li}_{jk}|^2 \bigg) + \sigma^2_{\mathrm{d}}. \quad (16)$$

To detect the transmitted codeword $q_{jk} = [q_{jk}[1], \ldots, q_{jk}[n_d]]^{\mathrm{T}}$, user $k$ in cell $j$ employs a mismatched scaled nearest-neighbor (SNN) decoder [30], which selects the codeword $\hat{q}_{jk}$ from the codebook $\mathcal{C}$ by applying the rule

$$\hat{q}_{jk} = \arg\min_{\bar{q}_{jk} \in \mathcal{C}} \big\| y^{t,\mathrm{u}}_{jk} - \hat{g}^{jk}_{jk} \bar{q}_{jk} \big\|^2, \quad (17)$$

where $y^{t,\mathrm{u}}_{jk} = [y^{t,\mathrm{u}}_{jk}[1], \ldots, y^{t,\mathrm{u}}_{jk}[n_d]]^{\mathrm{T}} \in \mathbb{C}^{n_d}$ is the received data vector.
Let $\epsilon^{\mathrm{dl}}_{jk} = \Pr\{\hat{q}_{jk} \ne q_{jk}\}$ be the downlink error probability experienced by URLLC user $k$ in cell $j$ under SNN decoding. An upper bound on $\epsilon^{\mathrm{dl}}_{jk}$ is obtained by using the standard random-coding approach [31],

$$\epsilon^{\mathrm{dl}}_{jk} \le \mathbb{E}_{g^{jk}_{jk}} \bigg\{ \Pr\bigg( \sum_{n=1}^{n_d} \imath_s\big(q_{jk}[n], y^{t,\mathrm{u}}_{jk}[n]\big) \le \log \frac{m-1}{r} \,\bigg|\, g^{jk}_{jk} \bigg) \bigg\}, \quad (18)$$

where $m = 2^b$ is the number of codewords of length $n_d$ conveying $b$ information bits, $r$ is a random variable uniformly distributed in the interval $[0, 1]$, and $\imath_s(q_{jk}[n], y^{t,\mathrm{u}}_{jk}[n])$ is the generalized information density, given by

$$\imath_s\big(q_{jk}[n], y^{t,\mathrm{u}}_{jk}[n]\big) = -s \big| y^{t,\mathrm{u}}_{jk}[n] - \hat{g}^{jk}_{jk} q_{jk}[n] \big|^2 + \frac{s |y^{t,\mathrm{u}}_{jk}[n]|^2}{1 + s \rho^{\mathrm{u}}_{jk} |\hat{g}^{jk}_{jk}|^2} + \log\big(1 + s \rho^{\mathrm{u}}_{jk} |\hat{g}^{jk}_{jk}|^2\big), \quad (19)$$

for all $s > 0$. In (18), the expectation is taken over the distribution of $g^{jk}_{jk}$, and the probability is computed with respect to the downlink data symbols $\{q_{jk}[n]\}_{n=1}^{n_d}$, the effective additive noise $\{z_{jk}[n]\}_{n=1}^{n_d}$, and the random variable $r$.
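The generalized information density (19) is cheap to evaluate per channel use, and averaging it over many draws estimates the generalized mutual information $I_s$. A minimal sketch with made-up scalar values:

```python
import numpy as np

def info_density(s, q, y, g_hat, rho):
    """Generalized information density i_s(q, y) of eq. (19), one channel use."""
    a = s * rho * abs(g_hat) ** 2
    return (-s * abs(y - g_hat * q) ** 2
            + s * abs(y) ** 2 / (1 + a)
            + np.log(1 + a))

# Made-up scalars: true precoded channel g, its assumed mean g_hat, power, noise.
g, g_hat, rho, sigma2 = 1.0 + 0.1j, 1.0 + 0.0j, 0.8, 0.2
rng = np.random.default_rng(1)
n = 20000
q_syms = np.sqrt(rho / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = g * q_syms + noise

vals = info_density(1.0, q_syms, y, g_hat, rho)
print(vals.mean() > 0)   # positive generalized mutual information in this regime
```

With a mild channel mismatch ($\hat{g} \ne g$) the average stays positive, reflecting that decoding remains possible despite the mismatched metric.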
The evaluation of the upper bound in (18) entails a demanding numerical computation: the probability must first be obtained, and the bound must then be tightened towards the low error probability target of the URLLC use case by optimizing over $s$. Fortunately, we can reliably approximate the right-hand side of (18) in closed form, with a significant relief of the computational burden, by using the saddlepoint approximation provided in [8, Th. 2].
The existence of a saddlepoint approximation is guaranteed by the fact that the third derivative of the moment-generating function of $-\imath_s(q_{jk}[n], y^{t,\mathrm{u}}_{jk}[n])$ exists in a neighborhood of zero delimited by the values $\underline{\varepsilon} < 0 < \overline{\varepsilon}$, given by [8, Appendix B]

$$\underline{\varepsilon} = -\frac{\sqrt{(\zeta_b - \zeta_a)^2 + 4\zeta_a \zeta_b (1 - \mu)} + \zeta_a - \zeta_b}{2 \zeta_a \zeta_b (1 - \mu)}, \quad (20)$$

$$\overline{\varepsilon} = \frac{\sqrt{(\zeta_b - \zeta_a)^2 + 4\zeta_a \zeta_b (1 - \mu)} - \zeta_a + \zeta_b}{2 \zeta_a \zeta_b (1 - \mu)}, \quad (21)$$

where

$$\zeta_a = s\big(\rho^{\mathrm{u}}_{jk} |g^{jk}_{jk} - \hat{g}^{jk}_{jk}|^2 + \sigma^2\big), \quad (22)$$

$$\zeta_b = \frac{s}{1 + s \rho^{\mathrm{u}}_{jk} |\hat{g}^{jk}_{jk}|^2} \big(\rho^{\mathrm{u}}_{jk} |g^{jk}_{jk}|^2 + \sigma^2\big), \quad (23)$$

$$\mu = \frac{s^2 \big| \rho^{\mathrm{u}}_{jk} |g^{jk}_{jk}|^2 + \sigma^2 - (g^{jk}_{jk})^* \hat{g}^{jk}_{jk} \rho^{\mathrm{u}}_{jk} \big|^2}{\zeta_a \zeta_b \big(1 + s \rho^{\mathrm{u}}_{jk} |\hat{g}^{jk}_{jk}|^2\big)}. \quad (24)$$

The saddlepoint approximation hinges on the cumulant-generating function of $-\imath_s(q_{jk}[n], y^{t,\mathrm{u}}_{jk}[n])$,

$$\upsilon(\varepsilon) = \log \mathbb{E}\big\{ e^{-\varepsilon\, \imath_s(q_{jk}[n],\, y^{t,\mathrm{u}}_{jk}[n])} \big\}, \quad (25)$$

and on its first derivative $\upsilon'(\varepsilon)$ and second derivative $\upsilon''(\varepsilon)$, given, for all $\varepsilon \in (\underline{\varepsilon}, \overline{\varepsilon})$, by

$$\upsilon(\varepsilon) = -\varepsilon \log\big(1 + s \rho^{\mathrm{u}}_{jk} |\hat{g}^{jk}_{jk}|^2\big) - \log\big(1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a \zeta_b (1 - \mu)\varepsilon^2\big), \quad (26)$$

$$\upsilon'(\varepsilon) = -\log\big(1 + s \rho^{\mathrm{u}}_{jk} |\hat{g}^{jk}_{jk}|^2\big) - \frac{(\zeta_b - \zeta_a) - 2\zeta_a \zeta_b (1 - \mu)\varepsilon}{1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a \zeta_b (1 - \mu)\varepsilon^2}, \quad (27)$$

$$\upsilon''(\varepsilon) = \bigg( \frac{(\zeta_b - \zeta_a) - 2\zeta_a \zeta_b (1 - \mu)\varepsilon}{1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a \zeta_b (1 - \mu)\varepsilon^2} \bigg)^{\!2} + \frac{2\zeta_a \zeta_b (1 - \mu)}{1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a \zeta_b (1 - \mu)\varepsilon^2}. \quad (28)$$

Let $m = e^{n_d R}$ for some strictly positive transmission rate $R = (\log m)/n_d$, and let $\varepsilon \in (\underline{\varepsilon}, \overline{\varepsilon})$ be the solution to the equation $R = -\upsilon'(\varepsilon)$. Let $I_s$ be the generalized mutual information [30], defined as $I_s = \mathbb{E}\{\imath_s(q_{jk}[1], y^{t,\mathrm{u}}_{jk}[1])\} = -\upsilon'(0)$. Lastly, consider the critical rate [31, Eq. (5.6.30)], given by $R^{\mathrm{cr}}_s = -\upsilon'(1)$. Then, we have three possible saddlepoint approximations for the error probability upper bound [8].
If $\varepsilon \in [0, 1]$, then $R^{\mathrm{cr}}_s \le R \le I_s$ and

$$\Pr\bigg( \sum_{n=1}^{n_d} \imath_s\big(q_{jk}[n], y^{t,\mathrm{u}}_{jk}[n]\big) \le \log \frac{e^{n_d R} - 1}{r} \bigg) \approx e^{n_d [\upsilon(\varepsilon) + \varepsilon R]} \big[ \Psi_{n_d, \varepsilon}(\varepsilon) + \Psi_{n_d, \varepsilon}(1 - \varepsilon) \big], \quad (29)$$

where

$$\Psi_{n_d, \varepsilon}(\ell) \triangleq e^{\frac{1}{2} n_d \ell^2 \upsilon''(\varepsilon)}\, Q\big(\ell \sqrt{n_d \upsilon''(\varepsilon)}\big). \quad (30)$$

If $\varepsilon > 1$, then $R < R^{\mathrm{cr}}_s$ and

$$\Pr\bigg( \sum_{n=1}^{n_d} \imath_s\big(q_{jk}[n], y^{t,\mathrm{u}}_{jk}[n]\big) \le \log \frac{e^{n_d R} - 1}{r} \bigg) \approx e^{n_d [\upsilon(1) + R]} \big[ \tilde{\Psi}_{n_d}(1, 1) + \tilde{\Psi}_{n_d}(0, -1) \big], \quad (31)$$

where

$$\tilde{\Psi}_{n_d}(\ell_1, \ell_2) \triangleq e^{n_d \ell_1 [R^{\mathrm{cr}}_s - R + \frac{1}{2} \upsilon''(1)]}\, Q\bigg( \ell_1 \sqrt{n_d \upsilon''(1)} + \ell_2 \frac{n_d (R^{\mathrm{cr}}_s - R)}{\sqrt{n_d \upsilon''(1)}} \bigg). \quad (32)$$

If $\varepsilon < 0$, then $R > I_s$ and

$$\Pr\bigg( \sum_{n=1}^{n_d} \imath_s\big(q_{jk}[n], y^{t,\mathrm{u}}_{jk}[n]\big) \le \log \frac{e^{n_d R} - 1}{r} \bigg) \approx 1 - e^{n_d [\upsilon(\varepsilon) + \varepsilon R]} \big[ \Psi_{n_d, \varepsilon}(-\varepsilon) - \Psi_{n_d, \varepsilon}(1 - \varepsilon) \big]. \quad (33)$$

The saddlepoint approximation is more accurate in the URLLC massive MIMO regime than the conventionally used normal approximation [23], since the former characterizes the exponential decay of the error probability, i.e., the error exponent, as a function of the URLLC codeword length and the transmission rate requirement $R$, while it uses the Berry-Esseen central-limit theorem (on which the normal approximation relies entirely) only to characterize the multiplicative factor accompanying the error-exponent term. The normal approximation, whose formulation directly involves the generalized mutual information $I_s$ but not $R$, is accurate only when $I_s$ is close to $R$. This operating regime does not hold for URLLC, wherein $R$ is typically lower than $I_s$ to meet the very low error probability targets.
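The quantities in (22)-(27) are simple closed-form scalars once the channel realization is fixed. The sketch below implements them and checks two stated properties against a Monte-Carlo average of the information density (19): a cumulant-generating function vanishes at 0, and $I_s = -\upsilon'(0)$; all numerical values (channel, power, noise, $s$) are illustrative, with $\sigma^2$ standing for the conditional noise variance:

```python
import numpy as np

# Closed-form pieces (22)-(27) for one channel realization; sigma2 stands for
# the conditional noise variance and all numbers are illustrative.
g, g_hat, rho, sigma2, s = 1.0 + 0.1j, 1.0 + 0.0j, 0.8, 0.2, 1.0
a = s * rho * abs(g_hat) ** 2

zeta_a = s * (rho * abs(g - g_hat) ** 2 + sigma2)                        # (22)
zeta_b = s * (rho * abs(g) ** 2 + sigma2) / (1 + a)                      # (23)
mu = (s**2 * abs(rho * abs(g) ** 2 + sigma2 - np.conj(g) * g_hat * rho) ** 2
      / (zeta_a * zeta_b * (1 + a)))                                     # (24)

def upsilon(e):                                                          # (26)
    return -e * np.log(1 + a) - np.log(1 + (zeta_b - zeta_a) * e
                                       - zeta_a * zeta_b * (1 - mu) * e ** 2)

def upsilon_prime(e):                                                    # (27)
    den = 1 + (zeta_b - zeta_a) * e - zeta_a * zeta_b * (1 - mu) * e ** 2
    return -np.log(1 + a) - ((zeta_b - zeta_a)
                             - 2 * zeta_a * zeta_b * (1 - mu) * e) / den

# Checks: the CGF vanishes at 0, and I_s = -upsilon'(0) matches E{i_s} (19).
rng = np.random.default_rng(2)
n = 200_000
q = np.sqrt(rho / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = g * q + np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
i_s = -s * np.abs(y - g_hat * q) ** 2 + s * np.abs(y) ** 2 / (1 + a) + np.log(1 + a)
print(abs(upsilon(0.0)) < 1e-12, abs(-upsilon_prime(0.0) - i_s.mean()) < 0.02)
```

The same scalars feed (28)-(33) directly, so in practice the whole per-user approximation costs a handful of arithmetic operations once $g$, $\hat{g}$ and $\sigma^2_{jk}$ are known.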
Once the approximate upper bounds on the downlink error probability are obtained via the saddlepoint approximation, we compute the downlink network availability [8], $\eta^{\mathrm{dl}}$, as

$$\eta^{\mathrm{dl}} = \Pr\big\{ \epsilon^{\mathrm{dl}}_{jk} \le \epsilon^{\mathrm{dl}}_{\mathrm{target}} \big\}, \quad (34)$$

which measures the probability that the target error probability $\epsilon^{\mathrm{dl}}_{\mathrm{target}}$ is satisfied by an arbitrary user $k$ in cell $j$, in the presence of interfering users. While the expectation in the error probability definition is taken with respect to the small-scale fading and the effective additive noise, given a large-scale fading realization, the probability in the network availability definition is computed with respect to the large-scale fading (i.e., path loss, shadowing, etc.). The expression of the network availability in (34) holds for any choice of precoding scheme, channel estimator and channel distribution. Importantly, it accounts for either choice of coexistence technique between heterogeneous services, namely puncturing or superposition coding.

IV. PRECODING AND POWER CONTROL
The choice of the precoding scheme and of the downlink power allocation deeply affects the SE of the eMBB users and the network availability of the URLLC users. For the sake of comparison, we herein consider three precoding schemes and three power allocation strategies. The general expression for the precoding vector intended for user $k$ in cell $j$ is given by

$$w_{jk} = \frac{v_{jk}}{\|v_{jk}\|}, \quad (35)$$

where the denominator makes the average power of the precoding vector unitary, and $v_{jk}$ is characterized next.
Multi-cell MMSE (M-MMSE):

$$v^{\mathrm{M\text{-}MMSE}}_{jk} = \bigg[ \bigg( \sum_{l=1}^{L} \hat{H}^j_l P_l (\hat{H}^j_l)^{\mathrm{H}} + \Upsilon_j + \sigma^2_{\mathrm{u}} I_M \bigg)^{\!-1} \hat{H}^j_j P_j \bigg]_{:,k},$$

where $P_l = \mathrm{diag}(p_{l1}, \ldots, p_{lK}) \in \mathbb{R}^{K \times K}$ is the matrix with the uplink transmit powers of all the users in cell $l$ as diagonal elements, $\Upsilon_j = \sum_{l=1}^{L} \sum_{i=1}^{K} p_{li} C^j_{li}$, and $\hat{H}^j_l = [\hat{h}^j_{l1} \cdots \hat{h}^j_{lK}]$.
M-MMSE precoding provides a nearly optimal downlink SE but requires each BS to acquire the CSI and statistical CSI of all the users of the multi-cell system. Moreover, the computation of the precoding vector, which entails inverting an $M \times M$ matrix, may be demanding for large BS arrays. Although impractical, M-MMSE precoding will serve as a benchmark.
Regularized zero-forcing (RZF):

$$v^{\mathrm{RZF}}_{jk} = \Big[ \hat{H}^j_j \big( (\hat{H}^j_j)^{\mathrm{H}} \hat{H}^j_j + \sigma^2_{\mathrm{u}} P^{-1}_j \big)^{-1} \Big]_{:,k}.$$

Compared to M-MMSE, RZF precoding requires each BS to estimate the channels of its own users only. Moreover, computing the RZF precoding vector is computationally cheaper, since the matrix to be inverted has size $K \times K$. However, RZF only suppresses the intra-cell interference and, unlike M-MMSE, provides the users with no protection mechanism against inter-cell interference and channel estimation errors.
Maximum ratio (MR): $v^{\mathrm{MR}}_{jk} = \hat{h}^j_{jk}$. It is computationally the cheapest but performance-wise the worst precoding scheme. MR only aims at maximizing the power of the desired signal, providing no interference-suppression mechanism. MR will serve as a lower bound on the performance.
Properly allocating the downlink power can make all the difference in meeting the strict reliability requirements of the URLLC users and improving the SE of the eMBB users. Next, we provide three power allocation schemes that take into account the power budget at the BSs, the adopted eMBB-URLLC coexistence strategy and the URLLC activation pattern, which is known at the BS in the downlink operation.
Equal power allocation (EPA): it consists in setting

$$\rho^{\mathrm{u}}_{ji} = \frac{\rho^{\max}_j A^t_{ji}}{\tilde{A}^t_j K_e + \sum_{k \in \mathcal{K}^u_j} A^t_{jk}}, \quad i \in \mathcal{K}^u_j, \quad (36)$$

$$\rho^{\mathrm{e}}_{jk} = \frac{\rho^{\max}_j \tilde{A}^t_j}{\tilde{A}^t_j K_e + \sum_{i \in \mathcal{K}^u_j} A^t_{ji}}, \quad k \in \mathcal{K}^e_j, \quad (37)$$

to satisfy the per-BS power constraint in (8) with equality and allocate the same share of power to each user, regardless of its channel conditions and service requirements.
Weighted fractional power allocation (FPA): it consists in setting the powers as

$$\rho^{\mathrm{u}}_{ji} = \frac{\omega \rho^{\max}_j A^t_{ji} (\beta^j_{ji})^{\nu}}{(1-\omega) \tilde{A}^t_j \sum_{k \in \mathcal{K}^e_j} (\beta^j_{jk})^{\nu} + \omega \sum_{u \in \mathcal{K}^u_j} A^t_{ju} (\beta^j_{ju})^{\nu}}, \quad i \in \mathcal{K}^u_j, \quad (38)$$

$$\rho^{\mathrm{e}}_{jk} = \frac{(1-\omega) \rho^{\max}_j \tilde{A}^t_j (\beta^j_{jk})^{\nu}}{(1-\omega) \tilde{A}^t_j \sum_{e \in \mathcal{K}^e_j} (\beta^j_{je})^{\nu} + \omega \sum_{i \in \mathcal{K}^u_j} A^t_{ji} (\beta^j_{ji})^{\nu}}, \quad k \in \mathcal{K}^e_j, \quad (39)$$

where the weight $\omega \in (0, 1)$ adjusts the amount of downlink power to be allocated to the URLLC users, while $\nu$ establishes the power control policy as a function of the average channel gain. An opportunistic power allocation is attained by setting $\nu > 0$, with which more power is allocated to the users with better channel conditions. Conversely, fairness is supported by setting $\nu < 0$, with which more power is allocated to the users with worse channel conditions. If $\omega \in (0.5, 1)$, a larger share of power is allocated to the URLLC users than to the eMBB users, whereas it is the other way around if $\omega \in (0, 0.5)$. Notice that, if $\nu = 0$ and $\omega = 0.5$, the FPA reduces to the EPA.
Optimal power allocation (OPA) for max product SINR: the powers are the solution of the optimization problem

$$\underset{\{\rho^s_{jk}\}}{\text{maximize}} \quad \prod_{j=1}^{L} \prod_{k=1}^{K} \mathrm{SINR}^{t,s}_{jk} \quad (40\mathrm{a})$$

$$\text{s.t.} \quad \sum_{k=1}^{K} \varrho^t_{jk} \le \rho^{\max}_j, \quad \forall j, \quad (40\mathrm{b})$$

where the superscript $s = \mathrm{e}$ if user $k \in \mathcal{K}^e_j$ and $s = \mathrm{u}$ otherwise, and $\varrho^t_{jk}$ is given in (13). Without further entangling the notation in (40), we remark that the SINR of inactive users is fictitiously set to 1 to preserve the formulation of the optimization problem. This power allocation strategy treats all the users as eMBB users; hence it would be optimal, in the sense of maximizing a lower bound on the sum SE of the multi-cell system, if no URLLC users were active in the given slot.
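Both EPA and FPA are one-line formulas per user. The sketch below (with made-up large-scale gains) also verifies the stated reduction of FPA to EPA for $\nu = 0$ and $\omega = 0.5$:

```python
# Sketch of the weighted FPA (38)-(39); EPA (36)-(37) is the nu=0, omega=0.5
# special case. Gains and powers are made-up illustrative values.
def fpa(rho_max, beta_e, beta_u, A, A_tilde, omega, nu):
    """Return (rho_e, rho_u) power lists for the eMBB and URLLC users of one BS."""
    den = ((1 - omega) * A_tilde * sum(b ** nu for b in beta_e)
           + omega * sum(a * b ** nu for a, b in zip(A, beta_u)))
    rho_u = [omega * rho_max * a * b ** nu / den for a, b in zip(A, beta_u)]
    rho_e = [(1 - omega) * rho_max * A_tilde * b ** nu / den for b in beta_e]
    return rho_e, rho_u

beta_e, beta_u = [1e-9, 4e-9], [2e-8]   # made-up large-scale fading gains
A, A_tilde = [1], 1                      # SPC with the URLLC user active

e0, u0 = fpa(1.0, beta_e, beta_u, A, A_tilde, omega=0.5, nu=0)   # = EPA
e1, u1 = fpa(1.0, beta_e, beta_u, A, A_tilde, omega=0.8, nu=0.5)
print(e0, u0)                      # equal shares, summing to rho_max
print(u1[0] > e1[0] + e1[1])       # -> True: power shifted to the URLLC user
```

By construction the allocation always meets the per-BS budget with equality, so tuning $(\omega, \nu)$ only redistributes power between the two services.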
Although the SINR expression in (12) is meaningless when applied to a URLLC user, we can still heuristically plug the URLLC powers resulting from (40) into the error probability analysis, and justify this approach a posteriori by looking at the performance. All the considered power allocation schemes, in principle, run on a slot basis in order to adapt the power coefficients to the URLLC activation pattern. Fortunately, these schemes only rely on the knowledge of the statistical CSI, which makes it possible to pre-compute some power coefficients, or to keep the power allocation fixed over multiple slots/frames in the absence of macroscopic changes in the propagation environment. Unlike the EPA and FPA schemes, the OPA scheme requires a certain degree of cooperation among the BSs, which must send statistical CSI to a central processing unit (e.g., a master BS) that computes the SINRs of all the users, solves the optimization problem, and feeds back the power coefficients to use. This would introduce an intolerable delay for the URLLC users. Moreover, solving problem (40), although efficiently possible as a geometric program [5, Th. 7.2], is unlikely to be doable within a time slot, especially for crowded networks. Hence, the OPA scheme is of limited practical use, but will serve for benchmarking purposes.

V. SIMULATION RESULTS
In this section, we present and discuss the results of our simulations, in which the coexistence of eMBB and URLLC is analyzed in depth under different setups. Specifically, we shed light on the impact on the performance of different factors, such as the transmission technique and the precoding scheme, the power control strategy, the imperfect CSI and estimation overhead, the pilot contamination, the length and number of slots in a TDD frame, and the characteristics of the URLLC activation pattern.
Our simulation scenario consists of a multi-cell massive MIMO system with $L = 4$ cells.
Each cell covers a nominal area of 500 × 500 m², and is served by a BS, placed at the cell center, equipped with a uniform linear array (ULA) of $M = 100$ equispaced half-wavelength antenna elements. A wrap-around topology is implemented as in [5, Sec. 4.1.3]. The users are dropped uniformly at random over the coverage area, but at a minimum distance of 25 m from the BS. In addition, we assume that the URLLC users are distributed uniformly at random in an area of 125 × 125 m² surrounding the BS. A random realization of the user locations determines a set of large-scale fading coefficients and constitutes a snapshot of the network. For a given network snapshot, the achievable downlink SEs of the active eMBB users are computed according to (11), while the downlink error probabilities of the URLLC users are obtained according to the approximations (29)-(33). The cumulative distribution function (CDF) of the SE and the network availability are then drawn over many network snapshots. The channel correlation matrices are generated according to the popular local scattering spatial correlation model [5, Sec. 2.6], and we assume that the scattering is localized around the users only, uniformly distributed at random with a 25° angular spread [8]. The average channel gain is obtained according to the non-line-of-sight macro-cell 3GPP model for 2 GHz carriers [32], and is given in dB by

$$\beta_k = -35.3 - 37.6 \log_{10}\Big(\frac{d_k}{1\,\mathrm{m}}\Big) + F_k$$

for an arbitrary user $k$ placed at a distance $d_k$ from its BS, and where $F_k \sim \mathcal{N}(0, \sigma^2_{\mathrm{sh}})$ models the log-normal shadowing as an i.i.d. random variable with standard deviation $\sigma_{\mathrm{sh}} = 4$ dB. The transmission bandwidth is 20 MHz, and the receiver noise power equals -94 dBm both for the uplink and the downlink. Moreover, we let $\rho^{\max}_j = 46$ dBm, $j = 1, \ldots, L$, and the uplink transmit power, both for pilot and payload data, be 23 dBm for all the users.
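The path-loss model above is straightforward to evaluate. As a quick sanity check (the printed values follow from the model formula, not from the paper's figures), the median channel gain at the 25 m minimum distance and at a 250 m distance:

```python
import math
import random

def beta_dB(d_m, sigma_sh=4.0):
    """NLoS macro-cell gain at 2 GHz (model above) with log-normal shadowing."""
    return -35.3 - 37.6 * math.log10(d_m) + random.gauss(0.0, sigma_sh)

# Median (shadowing-free) gains at the 25 m minimum distance and at 250 m:
print(round(-35.3 - 37.6 * math.log10(25), 1), "dB")    # -> -87.9 dB
print(round(-35.3 - 37.6 * math.log10(250), 1), "dB")   # -> -125.5 dB

# With rho_max = 46 dBm and -94 dBm noise power, the median SNR at 250 m is
# 46 - 125.5 - (-94) = 14.5 dB, before any precoding/array gain.
random.seed(0)
sample = beta_dB(100.0)   # one shadowed realization at 100 m
```

The roughly 37.6 dB/decade slope explains why confining the URLLC users within 125 m of the BS is so helpful for their reliability.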
We assume that the URLLC packet consists of b = 160 bits, yielding a transmission rate R = b/nd, which is suitable for factory automation use cases, such as motion control, and in line with the low-latency requirements [33, Annex A]. Lastly, without loss of generality, we set τu = 0, as we only focus on the downlink performance. Unless otherwise stated, we consider TDD frames of length τc = 580 channel uses, given by Tc = 2 ms and Bc = 290 kHz, which supports user mobility up to 67.50 km/h.

In the first set of simulations we consider the following setup: K = 20, α = 0.2, au = 10^−0.5, τp = 80 (no pilot contamination), and T = 5 slots of length nd = 100 channel uses. In Fig. 2 we plot the CDFs of the achievable downlink SE per "active" eMBB user obtained for different precoding and power allocation strategies, both for superposition coding (top subfigure) and for the puncturing technique (bottom subfigure). Under these assumptions, SPC is greatly superior to PUNC, precoding and power allocation strategies being equal. M-MMSE with OPA gives, as expected, the best SE, but EPA performs almost equally well, regardless of the precoding scheme. RZF provides an excellent practical trade-off between M-MMSE and MR. These results suggest that we are approximately operating in an interference-free scenario, thanks to the full and partial interference-suppression mechanisms provided by M-MMSE and RZF, respectively. As for the FPA strategy, in these simulations we have selected ν = 0.5 to promote an opportunistic power allocation and ω = 0.6 to prioritize the URLLC users. Such a choice does not favor the eMBB users and justifies the worst performance of FPA among the considered strategies when SPC is applied.

ON THE COEXISTENCE OF EMBB AND URLLC IN MULTI-CELL MASSIVE MIMO

Fig. 2. CDFs of the achievable downlink SE per active eMBB user, for different transmission, precoding and power allocation strategies.
Settings: K = 20, α = 0.2, au = 10^−0.5, τp = 80, T = 5, nd = 100.

Fig. 3. CDFs of the achievable downlink sum SE per cell, for different transmission, precoding and power allocation strategies. Settings: K = 20, α = 0.2, au = 10^−0.5, τp = 80, T = 5, nd = 100.

The same conclusions hold for the results shown in Fig. 3, where the CDFs of the corresponding sum SE per cell are illustrated. In these figures, we mainly emphasize the eMBB service outage likely occurring when PUNC is adopted. We define the eMBB service outage, under PUNC operation, as

ς^out = Pr{ ∑_{k ∈ K^e_j} SE^e_jk = 0 },   j = 1, . . . , L,

where the probability is computed with respect to the large-scale fading. This probability that a BS provides no service in a TDD frame to its eMBB users depends on the activation pattern of the URLLC users and on the number of slots per frame. We will discuss this aspect in detail later. Under the settings considered in Fig. 3, the eMBB service outage is quite significant, as it amounts to about 30%.

Fig. 4. Network availability for different transmission, precoding and power allocation strategies. Settings: K = 20, α = 0.2, au = 10^−0.5, τp = 80, T = 5, nd = 100.

Fig. 5. Downlink per-user error probability for different transmission and precoding strategies. Settings: EPA, K = 20, α = 0.2, au = 10^−0.5, τp = 80, T = 5, nd = 100.

In Fig. 4 we move to the URLLC performance by showing the downlink network availability achieved when ϵ^dl_target = 10^−5. Despite the interference caused by the eMBB users when SPC is performed, both M-MMSE and RZF are able to provide levels of network availability close to one, in line with PUNC, revealing a great ability to suppress the interference and support high reliability. Conversely, MR provides poor performance with SPC when the EPA or OPA (which is optimal for the eMBB users) schemes are used.
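For clarity, the network availability shown in Fig. 4 is simply the value of the per-user error-probability CDF at the target; a minimal sketch of this computation (with hypothetical error-probability samples, not simulation output):

```python
def network_availability(error_probs, eps_target=1e-5):
    """Fraction of users whose downlink error probability meets the target,
    i.e., the value of the error-probability CDF at eps_target (the
    'cross-point' read off Fig. 5)."""
    return sum(eps <= eps_target for eps in error_probs) / len(error_probs)

# Toy example with hypothetical per-user error probabilities:
samples = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3]
eta_dl = network_availability(samples)  # 3 of the 5 users meet the 1e-5 target
```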
Notice that our choice of the parameters of the FPA scheme pays off for the combination SPC/MR. The network availability values shown in Fig. 4 are obtained from the error probabilities whose CDFs are illustrated in Fig. 5. Graphically, the network availability is given by the cross-point between the CDF of the per-user error probability and the vertical line representing the error probability target value, as Fig. 5 highlights (blue circle markers). From this set of simulations, we conclude that SPC is clearly superior to PUNC in terms of SE while still providing very high network availability, when M-MMSE or RZF is employed. If MR is the only viable option (for instance, due to strict complexity or hardware constraints), then SPC with FPA, upon properly setting the design parameters ν and ω, is an effective choice to keep the network availability high while preventing any eMBB service outage.

Fig. 6. Average per-user SE achieved by SPC with FPA, for different precoding schemes and values of ν, ω. The average is taken over 200 network snapshots. Settings: K = 20, α = 0.2, au = 10^−0.5, τp = 80, T = 5, nd = 100.

In this regard, we now focus on how to select ν and ω appropriately.
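To make the roles of ν and ω concrete, a plausible sketch of a fractional power allocation is given below (a hypothetical implementation for illustration only; the paper's exact FPA expressions are defined earlier in the text and may differ):

```python
def fpa(beta_urllc, beta_embb, rho_max, nu, omega):
    """Sketch of fractional power allocation (FPA): a fraction omega of the
    BS power budget rho_max goes to the URLLC users and (1 - omega) to the
    eMBB users; within each group, user k receives power proportional to
    beta_k**nu (nu > 0 favors strong channels, nu < 0 promotes fairness,
    nu = 0 yields equal power within the group)."""
    def split(betas, budget):
        weights = [b ** nu for b in betas]
        return [budget * w / sum(weights) for w in weights]
    return split(beta_urllc, omega * rho_max), split(beta_embb, (1 - omega) * rho_max)

# nu = 0 reduces to an equal split within each group:
p_u, p_e = fpa([2.0, 1.0], [4.0, 1.0, 0.5], rho_max=1.0, nu=0.0, omega=0.6)
```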
By using the same settings as in the first set of simulations, in Fig. 6 we plot the average per-user SE assuming SPC and different precoding schemes with FPA, as ν and ω vary. From the eMBB user perspective, it is preferable to set a small value for ω, and ν in the interval [−0.5, 0]. While the former is trivial, the latter needs further discussion. Indeed, recall that positive values of ν allocate more power to users with better channel conditions. Since we assume the URLLC users are uniformly distributed in a smaller area surrounding the BSs, it is very likely that they are closer to the BS than most of the eMBB users. Therefore, negative values of ν increase the fairness and improve the eMBB users' performance. Large values of both ω and ν excessively unbalance the power distribution in favor of the URLLC users, degrading the SE of the eMBB users.

Conversely, small values of both ω and ν break down the network availability of the URLLC users in SPC operation, as clearly seen in Fig. 7. Nevertheless, both M-MMSE and RZF are able to provide levels of network availability close to 1, except when ν = −1, while MR is quite sensitive to this parameter tuning. Suppressing the multi-user interference is of vital importance when SPC is adopted, and RZF, although not dealing with the inter-cell interference, is an excellent trade-off between performance and practicality. Fine-tuning the parameters of the FPA scheme yields satisfying performance when using MR. FPA thus becomes a valid heuristic alternative to combat the multi-user interference whenever the latter cannot be removed by the precoding technique.

Setting ω becomes pointless when using PUNC with FPA, as only URLLC transmissions take place in the considered slot. Hence, in Fig. 8 and Fig. 9 we focus on the average SE per user and the network availability as only ν varies. For both cases we notice that an equal power allocation, i.e., ν = 0, is desirable.
As per the SE of the eMBB users, negative values of ν support lower SEs (e.g., the 95%-likely SE per user), hence the fairness among the users, while large positive values of ν support the peak SE in a greedy fashion, neglecting lower SEs. Therefore, ν = 0 is sound if the average SE is targeted, especially when the multi-user interference is partially or fully canceled. As per the network availability of the URLLC users, any choice of ν ∈ [−1, 1] is solid as long as M-MMSE or RZF is employed, while the performance of MR is relatively penalized whenever a non-neutral choice of ν is taken. Presumably, the number of URLLC users simultaneously active in the same slot (resulting from the chosen values of α and au) is such that the multi-user interference is not significant.

Fig. 7. Network availability achieved by SPC with FPA, for different precoding schemes and values of ν, ω. Settings: K = 20, α = 0.2, au = 10^−0.5, τp = 80, T = 5, nd = 100.

Fig. 8. Average per-user SE (with 95% confidence interval) achieved by PUNC with FPA, for different precoding schemes and values of ν. The average is taken over 200 network snapshots. Settings: K = 20, α = 0.2, au = 10^−0.5, τp = 80, T = 5, nd = 100.

Next, we evaluate the performance as a function of the number of slots in a TDD frame, T, and the size of the slot, nd, which in turn determines the URLLC codeword length. In this set of simulations and hereafter, we omit the results achieved by MR and only consider FPA with ν = 0 and ω = α, motivated by the previous results. Fig.
10 shows the CDFs of the sum SE per cell, for three different setups: (i) nd = 25, T = 20, (ii) nd = 50, T = 10, and (iii) nd = 100, T = 5.

Fig. 9. Network availability achieved by PUNC with FPA, for different precoding schemes and values of ν. Settings: K = 20, α = 0.2, au = 10^−0.5, τp = 80, T = 5, nd = 100.

Fig. 10. CDFs of the achievable downlink sum SE per cell, for different transmission and precoding strategies, as the number of slots per frame varies. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10^−0.5, τp = 80.

The structure of the TDD frame does not have a significant impact on the SE of the eMBB users when SPC is used. Conversely, it deeply affects the per-cell sum SE in the case of PUNC. Indeed, increasing the number of slots per frame makes the probability of eMBB service outage smaller, as it increases the opportunities for an eMBB user to find slots with no active URLLC users. This argument is supported by the results in Fig. 10, in which the eMBB service outage equals 0.01, 0.0725 and 0.2875 when T = 20, T = 10 and T = 5, respectively. On the other hand, with fewer slots, eMBB users might be active for a longer time, thereby experiencing higher SE. This explains the larger variations of the per-cell sum SE as T is decreased.

The length of the slot directly affects the performance of the URLLC users. As we can see in Fig.
11, the network availability increases drastically with the length of the slot (i.e., the URLLC codeword length). In fact, the length of the URLLC codeword determines the transmission rate of the URLLC users as R = b/nd; thus, the shorter the codeword, the higher the rate requirement to be reliably achieved and, in turn, the larger the error probability.2

Fig. 11. Network availability, for different transmission and precoding strategies, as the length of the slot varies. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10^−0.5, τp = 80.

Again, SPC is the technique that overall guarantees the best performance to both the eMBB and URLLC users, as its main limitation, namely the multi-user interference it causes, is overcome by using interference-suppression-based precoding schemes. Lastly, although letting the URLLC transmissions span many channel uses is beneficial in terms of network availability, the latency requirements impose localizing the transmissions in time.

Now, we move our focus to the impact of the pilot contamination and estimation overhead on the performance. By fixing the TDD frame length and the number of slots per frame, we vary the length of the uplink training, hence the number of available orthogonal pilots, and the length of each slot accordingly. In Fig. 12 we show how the average sum SE per cell evolves in different operating regimes with respect to the uplink training length. In these simulations, we assume K = 20, α = 0.2, τc = 580 and T = 5. Small values of τp entail low channel estimation overhead but high levels of pilot contamination, which reduces the effectiveness of the precoding. Our pilot assignment scheme preserves the performance of the URLLC users by assigning them unique pilots if available; otherwise, pilots are assigned randomly and contamination hits any user indiscriminately. The maximum number of URLLC users potentially active in this scenario is, according to the chosen parameters, 16.
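The frame-structure trade-off discussed around Figs. 10 and 11 can be sketched numerically. Under a simple independent-activation assumption (each of the αK = 4 URLLC users in a cell activates with probability au in every slot, an illustrative model rather than the paper's exact activation pattern), shorter slots raise the URLLC rate requirement R = b/nd, while more slots reduce the chance that every slot of a frame is punctured:

```python
b = 160            # URLLC packet size [bits]
K_u = 4            # URLLC users per cell (alpha * K = 0.2 * 20)
a_u = 10 ** -0.5   # per-slot activation probability

# Probability that a slot hosts at least one active URLLC user
p_punct = 1 - (1 - a_u) ** K_u

for n_d, T in [(25, 20), (50, 10), (100, 5)]:
    rate = b / n_d         # URLLC rate requirement [bit/channel use]
    outage = p_punct ** T  # eMBB service outage under PUNC (all slots punctured)
    print(f"n_d={n_d:3d}, T={T:2d}: R={rate:.2f}, outage ~ {outage:.4f}")
```

The resulting closed-form outage values, roughly 0.007, 0.085 and 0.29, are close to the empirical 0.01, 0.0725 and 0.2875 reported for Fig. 10, which supports this independence approximation.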
Hence, pilots are assigned randomly when τp = 10, causing both intra- and inter-cell pilot contamination and providing a low sum SE per cell, namely about 30 bit/s/Hz with SPC and less than 10 bit/s/Hz with PUNC. The performance worsens when τp = 20, as the eMBB users have to share only 4 orthogonal pilots since the protection mechanism of the URLLC users is now triggered. As we increase the value of τp, the intra-cell pilot contamination is primarily reduced by assigning orthogonal pilots to eMBB users of the same cell. If τp ≥ 32, then intra-cell pilot contamination is prevented and the inter-cell interference among the eMBB users remains the only impairment. The sum SE per cell keeps growing up to τp = 80, when all the users in the network are assigned mutually orthogonal pilots and the benefits of having no pilot contamination at all overcome the penalty of the increased estimation overhead. Trivially, there is no benefit to the channel estimation in further increasing τp, while the estimation overhead becomes expensive and drastically lowers the sum SE per cell. Finally, notice that RZF and M-MMSE provide essentially the same performance when both intra- and inter-cell pilot contamination occur, because the ability to suppress the multi-user interference is poor for both schemes.

2 The random-coding union bound in (18) defines the error probability as the probability that the average generalized information density is smaller than the transmission rate requirement.

Fig. 12. Average SE per cell (with 95% confidence interval), for different transmission and precoding strategies, as τp (and nd) varies. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10^−0.5, τc = 580, T = 5.
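A minimal sketch of this URLLC-first pilot assignment (an illustrative routine of ours, not the paper's exact procedure; pilot indices run from 0 to τp − 1):

```python
import random

def assign_pilots(urllc_ids, embb_ids, num_pilots, rng=random):
    """URLLC users get dedicated orthogonal pilots whenever enough are
    available; the remaining pilots are reused at random by the eMBB users.
    With fewer pilots than URLLC users, every user draws a pilot uniformly
    at random and contamination hits users indiscriminately."""
    if num_pilots > len(urllc_ids):
        pilots = {u: p for p, u in enumerate(urllc_ids)}  # unique URLLC pilots
        shared = list(range(len(urllc_ids), num_pilots))  # left for the eMBB users
        pilots.update({u: rng.choice(shared) for u in embb_ids})
    else:
        pilots = {u: rng.randrange(num_pilots) for u in list(urllc_ids) + list(embb_ids)}
    return pilots

# With tau_p = 20 pilots and 16 URLLC users, the eMBB users share only
# 4 orthogonal pilots, matching the situation described above.
pilots = assign_pilots(list(range(16)), list(range(16, 36)), num_pilots=20)
```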
As per the URLLC users, pilot contamination heavily affects the network availability when τp < 16, especially when SPC is employed and even though a long slot lowers the rate requirements, as we can observe in Fig. 13. Pilot contamination among URLLC users is destructive mainly because they are likely to be close to the BS, and to each other, experiencing strong interference that cannot be resolved when their channel estimates are correlated. Hence, our approach of prioritizing the URLLC users in the pilot assignment is technically sound. In addition, increasing the estimation overhead deeply penalizes the network availability, since more resources are subtracted from the data transmission: the slot length reduces and, as already explained earlier, the rate requirements of the URLLC users increase.

Next, we study how the performance is affected by the random activation pattern and the number of potentially active URLLC users per frame. Fig. 14 shows the average sum SE per cell as au and α vary, assuming different transmission and precoding schemes, and FPA with ν = 0 and ω = α. Notice that increasing ω proportionally to α is a reasonable approach for SPC, as more power is allocated to an increasing number of potentially active URLLC users, especially for large values of au. In these simulations, we assume two TDD frame configurations: (i) f = 4, T = 5, nd = 100, and (ii) f = 3, T = 8, nd = 65 (whose results are instead shown in Fig. 15).

Fig. 13. Network availability, for different transmission and precoding strategies, as τp (and nd) varies. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10^−0.5, τc = 580, T = 5.

Fig. 14. Average SE per cell, for different transmission and precoding strategies, as au and α vary. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = α, K = 20, τc = 580, f = 4, T = 5, nd = 100.
First, we observe that similar average sum SE per cell can be achieved by adopting either of the considered TDD frame configurations: pilot contamination is what slightly degrades the performance of the eMBB users when using the second frame configuration. The performance of PUNC converges to that of SPC when au ≤ 10^−2, hence for sparse activation patterns, as expected. Again, the performance gap between RZF and M-MMSE reduces in the second scenario (Fig. 15), as the inter-cell pilot contamination decreases the ability of M-MMSE to suppress the multi-user interference. PUNC yields eMBB service outage for large values of au, whereas

Fig. 15. Average SE per cell, for different transmission and precoding strategies, as au and α vary. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = α, K = 20, τc = 580, f = 3, T = 8, nd = 65.

Fig. 16. Network availability, for different precoding strategies, as au and α vary. The average is taken over 200 network snapshots. Settings: SPC and FPA with ν = 0 and ω = α, K = 20, τc = 580.
Two TDD frame configurations are considered.

SPC is still able to cancel the URLLC user interference and to provide excellent SEs. Lastly, we observe that if 80% of the users request URLLC, then the performance of the eMBB users is reduced by almost one third with respect to the case α = 0.2. This result is mainly due to the chosen value of ω in the FPA scheme, which aims to favor the URLLC performance as the number of URLLC users increases.

The performance achieved by the two considered TDD frame configurations appreciably differs in terms of network availability, as shown in Fig. 16 for SPC and Fig. 17 for PUNC. In both cases, reducing the length of the slot leads to about a 10% performance loss, while the pilot contamination only concerns the eMBB users. This performance gap is slightly more pronounced when using PUNC because the entire BS power is distributed among the URLLC users, causing stronger mutual interference. Overall, the first TDD frame configuration turns out to be quite robust to any of the considered transmission and precoding strategies, considered random URLLC activation patterns and URLLC user loads.

Fig. 17. Network availability, for different precoding strategies, as au and α vary. Settings: PUNC and FPA with ν = 0 and ω = α, K = 20, τc = 580. Two TDD frame configurations are considered.

Fig. 18. eMBB service outage, for different precoding strategies, as au and α vary. Settings: PUNC and FPA with ν = 0 and ω = α, K = 20, τc = 580. Two TDD frame configurations are considered.

A final aspect to be analyzed for this set of simulations is how the probability of eMBB service outage varies with au and α when PUNC is adopted. This completes the picture of the operating points at which PUNC is an effective choice for the eMBB users too and, importantly, further remarks the relevance of properly structuring the TDD frame. As we can see in Fig.
18, the advantage of adopting the TDD frame configuration with T = 8 slots, when using PUNC, consists in better preventing the eMBB service outage than the configuration with T = 5. For instance, when au = 10^−1 and α = 0.8 or α = 0.6, partitioning the share of the frame devoted to the data transmission into 8 slots halves the eMBB service outage compared to the case where 5 slots are adopted. Overall, PUNC can compete with SPC only in scenarios with low URLLC traffic loads, upon properly structuring the TDD frame, and as long as a moderate eMBB performance loss is tolerated, either in terms of sum SE per cell or of eMBB service outage. On the other hand, SPC hinges on precoding schemes able to suppress the multi-user interference, which, in turn, leverage the spatial degrees of freedom available at the BS and the high accuracy of the acquired CSI.

Fig. 19. Average SE per cell (with 95% confidence interval), for different transmission and precoding strategies, as K and τc vary. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = 0.2, α = 0.2, au = 10^−1, f = 3, T = 5.
Finally, we evaluate the performance varying the total number of users and the TDD frame length. Fig. 19 shows the average sum SE per cell, for different transmission and precoding strategies, as the number of users per cell, K, grows from 10 to 60, considering two different TDD frame lengths, namely 580 and 300 channel uses. The latter corresponds to a shorter coherence time and a narrower coherence bandwidth, and hence supports a higher user mobility, compared to the case with 580 channel uses. However, a shorter frame entails fewer resources that can be allocated to the data transmission and the uplink training. In these simulations we assume FPA with ν = 0 and ω = 0.2, α = 0.2, au = 10^−1, T = 5 and pilot reuse factor f = 3. Moreover, as τp = fK and τc is fixed, each value of K yields a different configuration of uplink training and slot length, i.e., of τp and nd, respectively. From Fig. 19 we observe the average sum SE per cell increasing with K, which demonstrates the great ability of SPC with M-MMSE and RZF to spatially multiplex the users. The average sum SE per cell saturates for values of K larger than 60 for τc = 580, and around 40 for τc = 300, wherein the channel estimation overhead heavily burdens the SE. PUNC is far inferior to SPC because it allocates fewer resources to the eMBB users, and the performance gap increases with K as the number of URLLC users per cell grows proportionally. Therefore, letting K increase makes punctured slots more likely, which not only subtracts resources from the eMBB users, reducing their SE, but also increases the eMBB service outage, as shown in Table III. Notice that the eMBB service outage does not change when varying τc as long as T is fixed.

Table III and Table IV show the network availability for different transmission and precoding strategies and different values of K, also emphasizing how τp and nd vary accordingly to meet the TDD frame length.
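The per-K configurations of uplink training and slot length reported in Tables III and IV follow directly from the frame budget τc = τp + T·nd with τp = fK; a quick sketch:

```python
def frame_config(tau_c, K, f=3, T=5):
    """Uplink-training length tau_p = f*K and downlink slot length
    n_d = (tau_c - tau_p) / T, from the TDD frame budget tau_c."""
    tau_p = f * K
    n_d = (tau_c - tau_p) // T
    return tau_p, n_d

for K in (10, 20, 30, 40, 50, 60):
    print(K, frame_config(580, K), frame_config(300, K))
```

For instance, K = 10 gives (τp, nd) = (30, 110) for τc = 580 and (30, 54) for τc = 300, reproducing the first rows of the two tables.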
In particular, Table III shows the performance achieved with τc = 580, while Table IV shows the performance achieved with τc = 300.

TABLE III
NETWORK AVAILABILITY AND EMBB SERVICE OUTAGE, τc = 580

             |     ηdl (SPC)     |    ηdl (PUNC)     | ςout (PUNC)
 K   τp   nd | M-MMSE    RZF     | M-MMSE    RZF     |
 10   30 110 | 0.9989    0.9966  | 1         0.9989  | 0.0012
 20   60 104 | 0.9988    0.9957  | 0.9944    0.9906  | 0.0038
 30   90  98 | 0.9988    0.9950  | 0.9934    0.9893  | 0.0225
 40  120  92 | 0.9969    0.9885  | 0.9881    0.9819  | 0.0625
 50  150  86 | 0.9864    0.9787  | 0.9790    0.9672  | 0.1050
 60  180  80 | 0.9807    0.9697  | 0.9728    0.9601  | 0.1737

TABLE IV
NETWORK AVAILABILITY AND EMBB SERVICE OUTAGE, τc = 300

             |     ηdl (SPC)     |    ηdl (PUNC)     | ςout (PUNC)
 K   τp   nd | M-MMSE    RZF     | M-MMSE    RZF     |
 10   30  54 | 0.7936    0.7683  | 0.7844    0.7534  | 0.0012
 20   60  48 | 0.6786    0.6353  | 0.6905    0.6685  | 0.0038
 30   90  42 | 0.4796    0.4296  | 0.5646    0.5435  | 0.0225
 40  120  36 | 0.1813    0.1457  | 0.3192    0.3192  | 0.0625
 50  150  30 | 0.0021    0       | 0.0250    0.0250  | 0.1050
 60  180  24 | 0         0       | 0         0       | 0.1737

The TDD frame with τc = 580 allows achieving a network availability above 96% with up to 60 users per cell (of which 12 are URLLC users) with any of the considered transmission and precoding techniques, meaning that such an amount of resources is sufficient to excellently support the considered URLLC user loads and their activation pattern. Conversely, the network availability supported by the TDD frame with τc = 300, reported in Table IV, is considerably lower, even close (or equal) to zero for K ≥ 50, emphasizing how sensitive the network availability is to the length of the TDD frame, hence to the amount of available resources. Importantly, we observe the decreasing trend of the network availability as K increases, which for PUNC is milder and mainly due to the shorter URLLC codeword length, but for SPC is severe and mainly due to the increase of the multi-user interference.
Indeed, the results in Table IV clearly confirm that PUNC is more robust than SPC when K ≥ 20.

VI. CONCLUSION

In this paper, we considered the non-orthogonal multiplexing of heterogeneous services, namely the enhanced mobile broadband (eMBB) and the ultra-reliable low-latency communication (URLLC), in the downlink of a multi-cell massive MIMO system. eMBB and URLLC have opposite characteristics and diverse requirements. eMBB transmissions involve a large payload that spans multiple radio frames, and demand high spectral efficiency. URLLC users, instead, intermittently transmit small payloads in a very short time, demanding low latency and error probabilities on the order of 10^−5. Such heterogeneity calls for effective resource allocation strategies to let eMBB and URLLC peacefully coexist. Firstly, we provided a unified information-theoretic framework to assess the spectral efficiency (SE) of the eMBB in the infinite-blocklength ergodic regime, and the error probability of the URLLC in the nonasymptotic finite-blocklength regime. Both analyses encompass imperfect channel state information (CSI) acquisition at the base stations (BSs) via uplink pilot transmissions, pilot contamination and pilot overhead, spatially correlated channels, and the lack of CSI at the users. Secondly, we generalized the proposed framework to accommodate two alternative coexistence strategies: puncturing (PUNC) and superposition coding (SPC). The former prevents the inter-service interference, aiming to protect the URLLC reliability, while the latter accepts it, aiming to maintain the eMBB service. Thirdly, we numerically evaluated the performance achieved by PUNC and SPC under different precoding and power allocation schemes, and subject to different configurations of the time-division duplex radio frame and of the URLLC random activation pattern.
Simulation results revealed that the spatial degrees of freedom available at the BSs, when fully exploited by interference-suppression-based precoding schemes and upon a high-quality CSI acquisition, make it possible to largely resolve the multi-user interference caused by the SPC operation, providing far higher eMBB SE than PUNC, yet ensuring comparably low error probabilities for the URLLC. However, whenever these conditions do not hold, e.g., when severe pilot contamination degrades the channel estimates or the degrees of freedom are not sufficient to handle the interference between many users, PUNC turns out to be a necessary operation to preserve the URLLC performance, although it might cause eMBB service outage. Unlike prior works wherein the URLLC performance is inappropriately assessed by using the outage capacity analysis or the error probability obtained by the normal approximation, in this work the finite-blocklength information-theoretic analysis relies on mismatched receivers and on the saddlepoint approximation, which is appropriate for URLLC scenarios in massive MIMO operation. This work can be extended by including massive machine-type communication (mMTC) in the coexistence strategies, and by including the study of the uplink in the proposed generalized framework. Finally, investigating the non-orthogonal multiplexing of heterogeneous services in distributed user-centric systems, such as cell-free massive MIMO [34]–[36], which provide user proximity, macrodiversity and ubiquitous connectivity, is certainly an appealing future research direction.

REFERENCES

[1] IMT Vision – Framework and overall objectives of the future development of IMT for 2020 and beyond, ITU-R Std. M.2083-0, 2015.
[2] P. Popovski, K. F. Trillingsgaard, O. Simeone, and G. Durisi, "5G wireless network slicing for eMBB, URLLC, and mMTC: A communication-theoretic view," IEEE Access, vol. 6, pp. 55 765–55 779, 2018.
[3] T. L.
Marzetta, "Noncooperative cellular wireless with unlimited numbers of base station antennas," IEEE Trans. Wireless Commun., vol. 9, no. 11, pp. 3590–3600, 2010.
[4] T. L. Marzetta, E. G. Larsson, H. Yang, and H. Q. Ngo, Fundamentals of Massive MIMO. Cambridge University Press, 2016.
[5] E. Björnson, J. Hoydis, and L. Sanguinetti, "Massive MIMO networks: Spectral, energy, and hardware efficiency," Foundations and Trends® in Signal Processing, vol. 11, no. 3-4, pp. 154–655, 2017.
[6] P. Popovski, J. J. Nielsen, Č. Stefanović, E. de Carvalho, E. Ström, K. F. Trillingsgaard, A. Bana, D. M. Kim, R. Kotaba, J. Park, and R. B. Sørensen, "Wireless access for ultra-reliable low-latency communication: Principles and building blocks," IEEE Network, vol. 32, no. 2, pp. 16–23, Mar. 2018.
[7] A.-S. Bana, E. de Carvalho, B. Soret, T. Abrão, J. C. Marinello, E. G. Larsson, and P. Popovski, "Massive MIMO for internet of things (IoT) connectivity," Physical Communication, vol. 37, p. 100859, 2019. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1874490719303891
[8] J. Östman, A. Lancho, G. Durisi, and L. Sanguinetti, "URLLC with massive MIMO: Analysis and design at finite blocklength," IEEE Trans. Wireless Commun., vol. 20, no. 10, pp. 6387–6401, Oct. 2021.
[9] E. Björnson, E. de Carvalho, J. H. Sørensen, E. G. Larsson, and P. Popovski, "A random access protocol for pilot allocation in crowded massive MIMO systems," IEEE Trans. Wireless Commun., vol. 16, no. 4, pp. 2220–2234, Apr. 2017.
[10] A. Anand, G. De Veciana, and S. Shakkottai, "Joint scheduling of URLLC and eMBB traffic in 5G wireless networks," in Proc. of IEEE Conference on Computer Communications (INFOCOM), Apr. 2018, pp. 1970–1978.
[11] R. Kassab, O. Simeone, P. Popovski, and T. Islam, "Non-orthogonal multiplexing of ultra-reliable and broadband services in fog-radio architectures," IEEE Access, vol. 7, pp. 13 035–13 049, 2019.
[12] A. A. Esswie and K. I. Pedersen, “Opportunistic spatial preemptive scheduling for URLLC and eMBB coexistence in multi-user 5G networks,” IEEE Access, vol. 6, pp. 38451–38463, 2018.
[13] S. F. Abedin, M. G. R. Alam, S. M. A. Kazmi, N. H. Tran, D. Niyato, and C. S. Hong, “Resource allocation for ultra-reliable and enhanced mobile broadband IoT applications in fog network,” IEEE Trans. Commun., vol. 67, no. 1, pp. 489–502, Jan. 2019.
[14] A. Matera, R. Kassab, O. Simeone, and U. Spagnolini, “Non-orthogonal eMBB-URLLC radio access for cloud radio access networks with analog fronthauling,” Entropy, vol. 20, no. 9, 2018.
[15] M. Alsenwi, N. H. Tran, M. Bennis, A. Kumar Bairagi, and C. S. Hong, “eMBB-URLLC resource slicing: A risk-sensitive approach,” IEEE Commun. Lett., vol. 23, no. 4, pp. 740–743, Apr. 2019.
[16] R. Abreu, T. Jacobsen, G. Berardinelli, K. Pedersen, N. H. Mahmood, I. Z. Kovacs, and P. Mogensen, “On the multiplexing of broadband traffic and grant-free ultra-reliable communication in uplink,” in Proc. of IEEE Vehicular Technology Conference (VTC-Spring), Apr. 2019, pp. 1–6.
[17] E. N. Tominaga, H. Alves, R. D. Souza, J. Luiz Rebelatto, and M. Latva-aho, “Non-orthogonal multiple access and network slicing: Scalable coexistence of eMBB and URLLC,” in Proc. of IEEE Vehicular Technology Conference (VTC-Spring), Apr. 2021, pp. 1–6.
[18] F. Saggese, M. Moretti, and P. Popovski, “Power minimization of downlink spectrum slicing for eMBB and URLLC users,” IEEE Trans. Wireless Commun., vol. 21, no. 12, pp. 11051–11065, Dec. 2022.
[19] M. Almekhlafi, M. A. Arfaoui, C. Assi, and A. Ghrayeb, “Joint resource and power allocation for URLLC-eMBB traffics multiplexing in 6G wireless networks,” in Proc. IEEE Int. Conf. on Commun. (ICC), Jun. 2021, pp. 1–6.
[20] J. Zeng, T. Lv, R. P. Liu, X. Su, Y. J. Guo, and N. C.
Beaulieu, “Enabling ultrareliable and low-latency communications under shadow fading by massive MU-MIMO,” IEEE Internet Things J., vol. 7, no. 1, pp. 234–246, Jan. 2020.
[21] H. Ren, C. Pan, Y. Deng, M. Elkashlan, and A. Nallanathan, “Joint pilot and payload power allocation for massive-MIMO-enabled URLLC IIoT networks,” IEEE J. Sel. Areas Commun., vol. 38, no. 5, pp. 816–830, May 2020.
[22] A. A. Nasir, H. D. Tuan, H. Q. Ngo, T. Q. Duong, and H. V. Poor, “Cell-free massive MIMO in the short blocklength regime for URLLC,” IEEE Trans. Wireless Commun., vol. 20, no. 9, pp. 5861–5871, Sep. 2021.
[23] Y. Polyanskiy, H. V. Poor, and S. Verdú, “Channel coding rate in the finite blocklength regime,” IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307–2359, May 2010.
[24] J. Scarlett, A. Martinez, and A. Guillén i Fàbregas, “Mismatched decoding: Error exponents, second-order rates and saddlepoint approximations,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2647–2666, May 2014.
[25] A. Martinez and A. Guillén i Fàbregas, “Saddlepoint approximation of random-coding bounds,” in Proc. of Inf. Theory and Applicat. Workshop (ITA), Feb. 2011, pp. 1–6.
[26] W. Yang, G. Durisi, T. Koch, and Y. Polyanskiy, “Quasi-static multiple-antenna fading channels at finite blocklength,” IEEE Trans. Inf. Theory, vol. 60, no. 7, pp. 4232–4265, Jul. 2014.
[27] G. Durisi, T. Koch, J. Östman, Y. Polyanskiy, and W. Yang, “Short-packet communications over multiple-antenna Rayleigh-fading channels,” IEEE Trans. Commun., vol. 64, no. 2, pp. 618–629, Feb. 2016.
[28] J. Östman, G. Durisi, E. G. Ström, M. C. Coşkun, and G. Liva, “Short packets over block-memoryless fading channels: Pilot-assisted or noncoherent transmission?” IEEE Trans. Commun., vol. 67, no. 2, pp. 1521–1536, Feb. 2019.
[29] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge University Press, 2005.
[30] A. Lapidoth and S. Shamai, “Fading channels: How perfect need ‘perfect side information’ be?” IEEE Trans. Inf. Theory, vol. 48, no. 5, pp. 1118–1134, May 2002.
[31] R. G. Gallager, Information Theory and Reliable Communication. New York, NY, USA: John Wiley & Sons, 1968.
[32] 3GPP, Further advancements for E-UTRA physical layer aspects (Release 9). 3GPP TS 36.814, Mar. 2017.
[33] 3GPP, Service requirements for cyber-physical control applications in vertical domains. 3GPP TS 22.104 v. 17.2.0, Dec. 2019.
[34] S. Buzzi and C. D’Andrea, “User-centric communications versus cell-free massive MIMO for 5G cellular networks,” in WSA 2017; 21th International ITG Workshop on Smart Antennas. VDE, 2017, pp. 1–6.
[35] G. Interdonato, E. Björnson, H. Q. Ngo, P. Frenger, and E. G. Larsson, “Ubiquitous cell-free massive MIMO communications,” EURASIP J. Wireless Commun. and Netw., vol. 2019, no. 1, p. 197, 2019.
[36] S. Buzzi, C. D’Andrea, A. Zappone, and C. D’Elia, “User-centric 5G cellular networks: Resource allocation and comparison with the cell-free massive MIMO approach,” IEEE Trans. Wireless Commun., vol. 19, no. 2, pp. 1250–1264, Feb. 2020.

diff --git a/GNE1T4oBgHgl3EQf_AZD/content/tmp_files/load_file.txt b/GNE1T4oBgHgl3EQf_AZD/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f12e41d5057804ee6f772f747256d7532c39ed7f
--- /dev/null
+++ b/GNE1T4oBgHgl3EQf_AZD/content/tmp_files/load_file.txt
@@ -0,0 +1,1445 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf,len=1444

This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
arXiv:2301.03575v1 [cs.IT] 9 Jan 2023

On the Coexistence of eMBB and URLLC in Multi-cell Massive MIMO

Giovanni Interdonato, Member, IEEE, Stefano Buzzi, Senior Member, IEEE, Carmen D’Andrea, Member, IEEE, Luca Venturino, Senior Member, IEEE, Ciro D’Elia, and Paolo Vendittelli

Abstract—The non-orthogonal coexistence between the enhanced mobile broadband (eMBB) and the ultra-reliable low-latency communication (URLLC) in the downlink of a multi-cell massive MIMO system is rigorously analyzed in this work. We provide a unified information-theoretic framework blending an infinite-blocklength analysis of the eMBB spectral efficiency (SE) in the ergodic regime with a finite-blocklength analysis of the URLLC error probability relying on the use of mismatched decoding, and of the so-called saddlepoint approximation. Puncturing (PUNC) and superposition coding (SPC) are considered as alternative downlink coexistence strategies to deal with the inter-service interference, under the assumption of only statistical channel state information (CSI) knowledge at the users.
eMBB and URLLC performances are then evaluated over different precoding techniques and power control schemes, by accounting for imperfect CSI knowledge at the base stations, pilot-based estimation overhead, pilot contamination, spatially correlated channels, the structure of the radio frame, and the characteristics of the URLLC activation pattern. Simulation results reveal that SPC is, in many operating regimes, superior to PUNC in providing higher SE for the eMBB yet achieving the target reliability for the URLLC with high probability. Moreover, PUNC might cause eMBB service outage in presence of high URLLC traffic loads. However, PUNC turns out to be necessary to preserve the URLLC performance in scenarios where the multi-user interference cannot be satisfactorily alleviated.

Index Terms—Enhanced Mobile Broadband, Error Probability, Massive MIMO, Mismatched Decoding, Network Availability, Non-Orthogonal Multiple Access, Puncturing, Saddlepoint Approximation, Spectral Efficiency, Superposition Coding, Ultra-Reliable Low-Latency Communications.

I.
INTRODUCTION

With the advent of the mobile application ecosystem and the resulting increase of the data-processing and storage capabilities of smart devices, several heterogeneous services have emerged, setting various stringent communication requirements in terms of data rates, latency, reliability and massive connectivity. These requirements and related use cases have been summarized by the 3rd Generation Partnership Project (3GPP) into three macro services, namely enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC) and massive machine-type communications (mMTC) [1].

This work was supported by the Ministero delle Imprese e del Made in Italy (former MISE) within the project “Smart Urban Mobility Management” (5G-SUMMA), Asse II, Supporto alle Tecnologie Emergenti. G. Interdonato, S. Buzzi, C. D’Andrea, L. Venturino and C. D’Elia are with the Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, Italy. They are also affiliated with Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT), 43124 Parma, Italy. P. Vendittelli is with TIM S.p.A., 20133 Milan, Italy. S. Buzzi is also affiliated with Politecnico di Milano, 20133 Milan, Italy. Corresponding author: Giovanni Interdonato.
eMBB services require high peak data rates and stable connectivity, and include most of the everyday usage applications: entertainment, multimedia, communication, collaboration, mapping, web surfing, etc. URLLC services require a one-way radio latency of 1 ms with 99.999% success probability, and include real-time and time-critical applications, such as autonomous driving, automation control, augmented reality, video and image processing, etc. mMTC services enable connectivity between a vast number of miscellaneous devices, and include applications such as smart grids, traffic management systems, environmental monitoring, etc.

5G started to roll out variously as an eMBB service, essentially like a faster version of LTE, whereas mMTC and URLLC requirements continue to be refined and will materialize within the next decade, although some experimental activities are already taking place in many parts of the world.¹ Academic research and industrial standardization are currently investigating different coexistence mechanisms for such heterogeneous services, apparently moving away from the initial vision of a sliced network [2].
Slicing the network basically means allocating orthogonal resources (storage, computing, radio communications, etc.) to heterogeneous services so as to guarantee their mutual isolation. This approach is, in a broad sense, generally known as orthogonal multiple access (OMA). As an interesting alternative to orthogonal resource allocation, non-orthogonal multiple access (NOMA) is gaining increasing importance, especially with respect to the allocation of the radio access network (RAN) communication resources. The conventional approach to slice the RAN is to separate eMBB, mMTC, and URLLC services in the time and/or frequency domains, whereas NOMA relies on efficient coexistence strategies wherein heterogeneous services share the same time-frequency resources, being separated in the power and spatial domains. In this regard, the terminology Heterogeneous OMA (H-OMA) is often adopted [2] to distinguish the orthogonal resource allocation of heterogeneous services from that of services of the same type, referred to as OMA.
(The same distinction applies to H-NOMA with respect to NOMA.)

¹See, e.g., the funding programs from the Italian former Ministry of Economic Development, as well as those of other European countries, the EU, USA, China and Japan.

Massive MIMO [3]–[5] is a technology that uses a very large number of co-located antennas at the base stations (BSs) to coherently and simultaneously serve multiple users over the same radio resources. The users are multiplexed in the spatial domain by using beamforming techniques that enable high-directivity transmission and reception. The use of many antennas also triggers favorable propagation, which further reduces the multi-user interference, and channel hardening, which reduces the random fluctuations of the effective channel gain.
As a consequence, there is no need to adopt intricate signal processing techniques to deal with the multi-user interference. Such aggressive spatial multiplexing, along with the intrinsic practicality and scalability of the massive MIMO technology, leads to high levels of energy and spectral efficiency, spatial diversity, link reliability and connectivity. The primary focus of massive MIMO research has been on increasing the user data rates, thereby targeting the eMBB requirements. Lately, some studies have highlighted the significant benefits that massive MIMO is able to provide to URLLC [6]–[8] by reducing the outage and error probability, and therefore increasing the link reliability. Higher reliability results in fewer retransmissions which, in turn, translates to lower latency. mMTC also benefits from the massive MIMO technology [7], [9] by capitalizing on the high energy efficiency to increase devices’ battery lifetime.
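The channel hardening effect mentioned above can be quantified by the relative variance of the effective channel gain, Var{‖h‖²}/E{‖h‖²}², which for an i.i.d. Rayleigh fading channel decays as 1/M with the number of BS antennas M. The short Monte Carlo sketch below is purely illustrative (an i.i.d.-fading toy model, not the spatially correlated system model of this paper):

```python
import numpy as np

rng = np.random.default_rng(0)


def hardening_metric(M, trials=20_000):
    """Relative variance Var{||h||^2} / E{||h||^2}^2 of the effective
    channel gain for an i.i.d. Rayleigh channel h in C^M.

    For CN(0, 1) entries this ratio equals 1/M in expectation, so the
    effective gain fluctuates less and less as M grows ("hardens")."""
    h = (rng.standard_normal((trials, M)) +
         1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    gain = np.sum(np.abs(h) ** 2, axis=1)   # effective channel gain per trial
    return gain.var() / gain.mean() ** 2


for M in (1, 10, 100):
    print(f"M = {M:3d}: relative gain variance ~ {hardening_metric(M):.4f}")
```

With M = 100 antennas the effective gain fluctuates two orders of magnitude less than with a single antenna, which is one reason why short-packet transmissions over hardened channels behave almost deterministically.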
Besides, favorable propagation enables an aggressive spatial multiplexing of the mMTC devices, facilitating the detection and the random access procedures.

A. RELATED WORKS

Coexistence between heterogeneous services was initially studied in systems wherein a single-antenna BS serves multiple heterogeneous users. In [2], Popovski et al. proposed a first tractable communication-theoretic model that captures the key features of eMBB, URLLC and mMTC traffic. (These features are summarized in Table I.) Specifically, [2] analyzes two scenarios for a single-cell model: (i) slicing for URLLC and eMBB, and (ii) slicing for mMTC and eMBB. The downlink multiplexing of URLLC and eMBB is studied in [10] with the goal of maximizing the utility of the eMBB traffic while satisfying the quality-of-service requirements of the URLLC traffic, and by abstracting the operation at the physical layer.
Coexistence mechanisms between URLLC and eMBB traffic, based on the puncturing technique, have been proposed in [11] for the uplink of a multi-cell network, wherein a simplified Wyner channel model with no fading was assumed. As for multi-user MIMO systems, in [12] a null-space-based spatial preemptive scheduler for joint URLLC and eMBB traffic is proposed for cross-objective optimization, where the critical URLLC quality-of-service (QoS) is guaranteed while maximizing the eMBB ergodic capacity. The spatial degrees of freedom at the BS are leveraged to fulfill the URLLC decoding requirements without jeopardizing the performance of the eMBB users. A similar study, but for a distributed setup, was conducted in [13], where a joint user association and resource allocation problem is formulated for the downlink of a fog network, considering the coexistence of URLLC and eMBB services for internet-of-things (IoT) applications. An analytic hierarchy process was proposed for setting the priorities of the services and for formulating a two-sided matching game where a stable association between the fog network infrastructure and the IoT devices is established.
The coexistence between eMBB and URLLC is of most interest [14]–[18], and is mainly handled with three alternative techniques, herein listed in descending order of complexity:

- successive interference cancellation (SIC), with which the receiver iteratively decodes and removes the contributions of a specific service from the cumulative received signal. This approach requires that the receiver has access to the channel state information (CSI) to be able to perform the multi-stage decoding, with decreasing levels of interference, at the required successful decoding probability.
- puncturing (PUNC), which consists in preventing the inter-service interference. In the downlink, whenever the transmitter has to transmit a URLLC signal, the eMBB signals are dropped over the channel uses involved by the URLLC transmission. In the uplink, the receiver uses an erasure decoder to discard the eMBB signals, provided that it is able to detect the presence of URLLC transmissions, e.g., via energy detection.
- superposition coding (SPC), with which the transmitter simply sends a linear combination of the eMBB and URLLC signals. At the receiver, both for the uplink and the downlink, the inter-service interference is treated as uncorrelated noise (TIN). Again, this approach requires the receiver to be able to detect the presence of the undesired transmissions.

In [14], the coexistence of URLLC and eMBB services in the uplink of a C-RAN architecture with shared analog fronthaul links is analyzed, accounting for SIC, puncturing, and TIN. This work provides an information-theoretic study of the performance of URLLC and eMBB traffic under both H-OMA and H-NOMA, by considering standard cellular models with additive Gaussian noise links and finite inter-cell interference. The main conclusions are that NOMA achieves higher eMBB rates with respect to H-OMA, while guaranteeing reliable low-rate URLLC communication with minimal access latency.
Moreover, H-NOMA under SIC is seen to achieve the best performance, while, unlike the case with digital capacity-constrained fronthaul links, TIN always outperforms puncturing. A similar analysis is conducted in [11], including both the uplink and downlink of a C-RAN without analog fronthaul, but considering practical aspects such as fading, the lack of CSI for the URLLC transmitters, rate adaptation for the eMBB transmitters, and finite fronthaul capacity. Abreu et al. in [16] analyze both the H-OMA and H-NOMA options for eMBB traffic and grant-free URLLC in the uplink, accounting for minimum mean square error (MMSE) receivers with and without SIC, and under the assumption of Rayleigh fading channels. The resulting outage probability and achievable rates show that TIN is mostly beneficial in the sufficiently high-SNR regime when SIC is employed or, in some cases, with low URLLC load. Otherwise, H-OMA supports higher loads for both services simultaneously.
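To make the difference between the PUNC and SPC downlink strategies concrete, the following sketch builds a transmit frame from unit-power eMBB and URLLC symbol streams at the level of individual channel uses. It deliberately abstracts away precoding and fading, and the power-split factor rho is a hypothetical illustrative knob, not a parameter taken from the works discussed above:

```python
import numpy as np

rng = np.random.default_rng(1)


def downlink_frame(embb, urllc, active, mode, rho=0.6):
    """Combine unit-power eMBB and URLLC symbol streams over one frame.

    active : boolean mask marking the channel uses hit by a URLLC arrival.
    mode   : 'PUNC' drops the eMBB symbols on those channel uses;
             'SPC'  superimposes both streams, giving a fraction rho of
             the power to URLLC (rho is an illustrative knob only).
    """
    x = np.array(embb, dtype=complex)
    if mode == "PUNC":
        x[active] = urllc[active]                 # eMBB punctured
    elif mode == "SPC":
        x[active] = (np.sqrt(rho) * urllc[active]
                     + np.sqrt(1 - rho) * embb[active])
    return x


n = 8
embb = np.exp(2j * np.pi * rng.random(n))         # unit-modulus eMBB symbols
urllc = np.exp(2j * np.pi * rng.random(n))        # unit-modulus URLLC symbols
active = np.zeros(n, dtype=bool)
active[2:4] = True                                # sporadic URLLC arrival

x_punc = downlink_frame(embb, urllc, active, "PUNC")
x_spc = downlink_frame(embb, urllc, active, "SPC")
```

Under PUNC the eMBB stream is simply erased on the URLLC channel uses, protecting the URLLC decoder at the cost of eMBB throughput; under SPC both streams survive on every channel use and the URLLC receiver treats the residual eMBB component as noise (TIN).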
ON THE COEXISTENCE OF EMBB AND URLLC IN MULTI-CELL MASSIVE MIMO

Recently, [17] proposed an approach to improve the supported loads for URLLC in the uplink, for both H-OMA and H-NOMA in the presence of eMBB, showing the superiority of H-NOMA in ensuring the reliability requirements of both services. A similar analysis, but for the downlink, is conducted in [18], [19], where optimal resource allocation strategies and H-NOMA are combined to satisfy the eMBB and URLLC QoS constraints, under the assumption of perfect eMBB CSI and statistical URLLC CSI knowledge.

TABLE I
FEATURES OF THE 5G USE CASES

                        | eMBB                            | URLLC                                     | mMTC
characteristics         | high rate, moderate reliability | low latency, ultra reliability, low rate  | low rate, large connectivity
traffic                 | large payload, several devices  | small payload, few devices                | small payload, massive devices
activation pattern      | stable                          | intermittent                              | intermittent
time span               | long, multiple resources        | short, slot                               | long, multiple resources
frequency span          | single/multiple resources       | multiple resources, diversity             | single resource
scheduling              | to prevent access collision     | infeasible for high reliability           | random access if needed to support intermittency
fundamental target      | maximize data rate              | meet latency and reliability requirements | maximize supported arrival rate
reliability requirement | ~10^-3                          | ~10^-5                                    | ~10^-1
applications            | video streaming, augmented      | connected factories, traffic safety,      | internet of things, low-power
                        | reality, entertainment          | autonomous vehicles, telemedicine         | sensors, smart cities

The information-theoretic framework used by the aforementioned works to characterize the performance achieved by eMBB and URLLC users cannot be applied to massive MIMO scenarios, for different reasons. Establishing the rate (or the spectral efficiency) of the eMBB users in the ergodic (infinite-blocklength) regime, upon the block-fading channel model, is sound, as the eMBB codewords span an infinite number of independent fading realizations. Nevertheless, as for the performance of the URLLC users in a quasi-static fading scenario, the use of the outage capacity, whose analysis includes infinite-blocklength assumptions, leads to an inaccurate evaluation of the error probability, as demonstrated in [8].
In addition, outage capacity analyses do not capture the effects of the CSI acquisition overhead when pilots are used to estimate the uplink channel. As an alternative, finite-blocklength analyses have been proposed for URLLC in conventional cellular networks [18], [19], co-located massive MIMO networks [20], [21] and cell-free massive MIMO networks [22], and rely on the information-theoretic bounds and tools developed in [23], e.g., the well-known normal approximation. However, the work in [8] proved that the normal approximation is not accurate in the region of low error probabilities of interest in URLLC (< 10^-4), especially as the number of antennas at the BS increases and in the presence of imperfect CSI. Importantly, Östman et al. in [8] provided a more rigorous finite-blocklength information-theoretic framework relying on the use of mismatched decoding [24] and of the saddlepoint approximation [25] for evaluating the error probability of the URLLC users in co-located massive MIMO systems. This framework, previously developed for wireless fading channels in [26]–[28], accounts for linear signal processing, imperfect CSI and instantaneous channel estimation error, and additive uncorrelated noise including multi-user interference. However, the analysis of [8] is limited to the URLLC regime, and the coexistence with the eMBB is yet to be investigated under a unified information-theoretic framework.

B. CONTRIBUTIONS

Our contributions can be summarized as follows.

We investigate the non-orthogonal multiplexing of the eMBB and the URLLC in the downlink of a multi-cell massive MIMO system, by providing a unified information-theoretic framework that combines an infinite-blocklength analysis to assess the SE of the eMBB and a finite-blocklength analysis to assess the error probability of the URLLC.
Unlike prior works, wherein the URLLC performance is inappropriately evaluated by the use of the outage capacity analysis or the error probability obtained via the normal approximation, in this work the finite-blocklength information-theoretic analysis relies on the results and tools established in [8], where mismatched receivers and the saddlepoint approximation are assumed, but the coexistence between URLLC and eMBB was not investigated.

The proposed unified framework accommodates two alternative coexistence strategies: PUNC and SPC. The former prevents the inter-service interference to protect the URLLC reliability, whereas the latter accepts it to maintain the eMBB service. In addition, the analytical framework accounts for imperfect CSI acquisition at the BSs via uplink pilot transmissions, pilot contamination and pilot overhead, spatially correlated channels and the lack of CSI at the users.

We numerically evaluate the performance achieved by PUNC and SPC under different precoding schemes, namely maximum ratio, regularized zero-forcing and multi-cell MMSE, and different power allocation strategies, i.e., equal power allocation, weighted fractional power allocation and optimal power allocation maximizing the product SINR throughout the network. The coexistence between eMBB and URLLC is explored in various scenarios, including different configurations of the time-division duplex radio frame and different URLLC random activation patterns.

The results of our comprehensive simulation campaign highlight the clear superiority of SPC over PUNC in most of the considered operating regimes. The main limitation of SPC, namely the multi-user interference it causes, is often overcome by using regularized zero-forcing and multi-cell MMSE, which in turn hinge on high-quality CSI acquisition. Whenever these precoding techniques cannot be implemented due to complexity or hardware constraints, the URLLC reliability requirements can be met by fine-tuning the parameters of the proposed weighted fractional power allocation.
Conversely, performing PUNC is necessary to preserve the URLLC performance if the interference cancellation via precoding is ineffective, for instance, when pilot contamination is high or the multi-user interference is excessive.

Pilot contamination among URLLC users is particularly destructive. This led us to devise a pilot assignment policy that prioritizes the URLLC users. In our approach, we primarily assign unique orthogonal pilots to the URLLC users, admitting pilot reuse only among eMBB users. If feasible, orthogonal pilots are assigned within cells to prevent intra-cell pilot contamination, and if the uplink training length is sufficiently large, then mutually orthogonal pilots are guaranteed to all users.

C. PAPER OUTLINE

The remainder of this paper is organized as follows.
In Section II, we introduce the system model of the multi-cell massive MIMO system, including the description of the uplink training and a unified framework for the data transmission stage accounting for both puncturing and superposition coding techniques. In Section III, we present the information-theoretic analyses in the infinite-blocklength and finite-blocklength regimes for the eMBB and URLLC performance evaluation, respectively. Section IV details the precoding techniques and power allocation strategies to deal with the coexistence of eMBB and URLLC users. Simulation results and discussions are provided in Section V, while the main findings of this work are discussed in Section VI.

D. NOTATION

Vectors and matrices are denoted by boldface lowercase and boldface uppercase letters, respectively. Calligraphic uppercase letters denote sets, while C and R represent the sets of complex and real numbers, respectively.
E{·} indicates the expectation operator, while Pr{·} denotes the probability of a set. (x)+ represents the positive part function, namely (x)+ = max{x, 0}, and ⌊·⌋ denotes the floor function. The natural logarithm is indicated by log(·), and Q(·) denotes the Gaussian Q-function. CN(µ, Σ) describes a circularly symmetric complex Gaussian distribution with mean µ and covariance matrix Σ. The superscripts (·)^T, (·)^* and (·)^H denote the transpose, the conjugate and the conjugate transpose (Hermitian) operators, respectively. tr(A) indicates the trace of the matrix A, while ∥a∥ denotes the ℓ2-norm of the vector a. The notation [A]_{:,i} indicates the i-th column of the matrix A. I_N represents the identity matrix of size N×N.
Table II introduces the notation used in the system model of this paper.

II. SYSTEM MODEL

Let us consider a multi-cell massive MIMO system with L cells, each one served by a BS that is placed at the cell-center and equipped with M co-located antennas.

TABLE II
SYSTEM MODEL NOTATION

Symbol        | Description
L             | number of cells
K             | number of users per cell
M             | number of BS antennas
Ku            | number of URLLC users per cell
α             | Ku/K ∈ (0, 1)
Ke            | number of eMBB users per cell
τc            | TDD frame length
K^u_j         | set of URLLC users in cell j
τp            | UL training length
K^e_j         | set of eMBB users in cell j
τd            | DL data transmission length
T             | number of slots in a TDD frame
h^j_{lk}      | channel between BS j and user k in cell l, vector in C^M
ĥ^j_{lk}      | estimate of h^j_{lk}
h̃^j_{lk}      | estimation error h^j_{lk} − ĥ^j_{lk}
R^j_{lk}      | correlation matrix of h^j_{lk}
β^j_{lk}      | average channel gain of h^j_{lk}
C^j_{lk}      | correlation matrix of h̃^j_{lk}
f             | pilot reuse factor
p^p_{jk}      | UL pilot power
ρ^max_j       | max transmit power at BS j
P_{jk}        | set of all the users using the same pilot as user k in cell j
A^t_{jk}      | 1 if URLLC user k in cell j is active in slot t, 0 otherwise
a_u           | parameter of the Bernoulli distribution that draws A^t_{jk}
ς^e_{jk}[n]   | data transmitted by BS j to eMBB user k in channel use n
ς^u_{ji}[n]   | data transmitted by BS j to URLLC user i in channel use n
w_{jk}        | precoding vector, in C^M, used by BS j for its user k
σ^2_u         | UL noise variance
ρ^u_{ji}      | DL power to URLLC user i
σ^2_d         | DL noise variance
ρ^e_{jk}      | DL power to eMBB user k
g^{li}_{jk}   | precoded DL channel from BS l using w_{li} to user k in cell j
ĝ^{li}_{jk}   | estimate of g^{li}_{jk}
n_d           | URLLC codeword length
ϵ^dl_{jk}     | DL error probability
η^dl          | DL network availability
ν             | exponent characterizing the fractional power allocation (FPA)
ω             | FPA weight tuning the power allocated to the URLLC users

Each cell covers a square area of D × D km², and provides service to K users. It holds that M ≫ K, so that interference suppression can be efficiently carried out by exploiting the spatial degrees of freedom. A fraction 0 ≤ α ≤ 1 of the K users requests a URLLC service, e.g., a vehicle in cellular vehicle-to-everything (C-V2X) use cases for intelligent transportation systems, or a machine in factory automation use cases for "Industry 4.0". Letting Ku = αK be the number of URLLC users per cell, then Ke = K − Ku is the number of eMBB users per cell. The sets including the indices of the eMBB and URLLC users in cell j are denoted by K^e_j and K^u_j, respectively.
A. TDD PROTOCOL AND FRAME STRUCTURE

The considered system operates in time-division duplex (TDD) mode to facilitate CSI acquisition and limit the estimation overhead. In addition, we assume that the channel is reciprocal as a result of a perfect calibration of the RF chains. By leveraging the channel reciprocity, the channel estimates acquired by the BS in the uplink are then utilized in the downlink to design the transmit precoding vectors. As channel hardening holds for co-located massive MIMO systems with sufficiently large antenna arrays in most propagation environments, we assume that the users do not estimate the downlink channels, and reliably decode downlink data solely relying on the knowledge of the statistical CSI. Hence, the TDD protocol consists of three phases: (i) pilot-based uplink training, (ii) uplink data transmission, and (iii) downlink data transmission. The time-frequency resources are structured in TDD frames, each one grouping a set of subcarriers and time samples over which the channel response is assumed to be frequency-flat and time-invariant.
Fig. 1. An illustration of the TDD frame assuming no uplink data transmission phase, and representing the resource allocation in case of puncturing (PUNC) and superposition coding (SPC) operation.

The TDD frame must accommodate the aforementioned protocol phases and support all the users, thus its size is designed to match that of the smallest user's coherence block in the network. As shown in Fig. 1, the TDD frame consists of τc = Tc·Bc samples (or channel uses), where Tc is the coherence time and Bc is the coherence bandwidth. τp channel uses out of τc are spent for the uplink CSI acquisition, whereas the remaining channel uses are devoted to the uplink and downlink data transmission.
Since, in this paper, we only focus on the downlink operation, we assume, without loss of generality, that τd = τc − τp is the length of the downlink data transmission phase. The latter is divided into T slots of equal length. As conventionally assumed in the ergodic regime, an eMBB transmission spans multiple (theoretically, an infinite number of) TDD frames, wherein the channel realizations evolve independently according to the block-fading model. To evaluate the spectral efficiency achieved by the eMBB users, we look at a single TDD frame and resort to the information-theoretic bounds and tools of the infinite-blocklength regime [4], [5]. In contrast, URLLC transmissions are confined in time to meet the very strict latency requirements and are allowed to span only one slot. Hence, the number of channel uses in a slot equals the URLLC codeword length. We assume a random activation pattern of the URLLC users.
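As a concrete illustration of the frame bookkeeping above, the sketch below computes τc = Tc·Bc, the downlink budget τd = τc − τp, and the per-slot URLLC codeword length n_d = τd/T. All numeric values are illustrative assumptions, not parameters taken from the paper.

```python
# Illustrative TDD frame bookkeeping (all numbers are assumed, not from the paper).
Tc = 2e-3                 # coherence time in seconds (assumed)
Bc = 200e3                # coherence bandwidth in Hz (assumed)
tau_c = round(Tc * Bc)    # channel uses per TDD frame: tau_c = Tc * Bc
tau_p = 20                # channel uses spent on uplink training (assumed)
tau_d = tau_c - tau_p     # channel uses left for downlink data (no UL data phase)
T = 4                     # number of equal-length downlink slots (assumed)
n_d = tau_d // T          # URLLC codeword length = channel uses per slot
```

With these sample values, a frame of 400 channel uses leaves 380 for downlink data, i.e., four slots of 95 channel uses each, which is the blocklength entering the URLLC error-probability analysis.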
Within a TDD frame, a URLLC user may be active in multiple slots. To characterize the error probability of the URLLC transmissions, we look separately at each single slot of a TDD frame and resort to the finite-blocklength information-theoretic bounds and tools presented in [8].

B. CHANNEL MODEL AND UPLINK TRAINING

The channel response between the k-th user in cell l and the BS in cell j is denoted by the M-dimensional complex-valued vector h^j_{lk}. We assume correlated Rayleigh fading channels, that is h^j_{lk} ∼ CN(0_M, R^j_{lk}), where R^j_{lk} ∈ C^{M×M} is the positive semi-definite spatial correlation matrix. The corresponding average channel gain (or large-scale fading coefficient) is given by β^j_{lk} = tr(R^j_{lk})/M. Large-scale fading quantities are assumed to be known at the BS. In the uplink training phase, each user transmits a pilot sequence that spans τp channel uses.
The pilot sequence of user k in cell j is denoted by φ_{jk} ∈ C^{τp}. All the pilot sequences are drawn from a set of τp mutually orthogonal pilots, thereby the inner product between two pilots equals either τp, if the sequences are identical, or 0, if they are mutually orthogonal. Notice that re-using the pilots throughout the network might be unavoidable, as the share of the TDD frame reserved to the training is limited and, importantly, as the CSI acquisition overhead significantly degrades the spectral efficiency. Pilot reuse gives rise to additional interference, known as pilot contamination [3], that degrades the quality of the acquired CSI and correlates the channel estimates. The cumulative uplink signal received at BS j, denoted by Y^p_j ∈ C^{M×τp}, reads

Y_j^p = \sum_{k=1}^{K} \sqrt{p_{jk}^p}\, h_{jk}^j \phi_{jk}^T + \sum_{\substack{l=1 \\ l \neq j}}^{L} \sum_{i=1}^{K} \sqrt{p_{li}^p}\, h_{li}^j \phi_{li}^T + N_j^p,   (1)

where p^p_{jk} is the transmit pilot power, and N^p_j is the additive receiver noise with i.i.d. elements distributed as CN(0, σ^2_u), with σ^2_u being the receiver noise variance in the uplink. To estimate the channel of user k in its own cell, h^j_{jk}, BS j correlates Y^p_j with the known pilot sequence φ_{jk} as

y_{jjk}^p = Y_j^p \phi_{jk}^* = \sqrt{p_{jk}^p}\, \tau_p h_{jk}^j + \sum_{\substack{i=1 \\ i \neq k}}^{K} \sqrt{p_{ji}^p}\, h_{ji}^j \phi_{ji}^T \phi_{jk}^* + \sum_{\substack{l=1 \\ l \neq j}}^{L} \sum_{i=1}^{K} \sqrt{p_{li}^p}\, h_{li}^j \phi_{li}^T \phi_{jk}^* + N_j^p \phi_{jk}^*.   (2)

In (2), the second term on the right-hand side represents the intra-cell pilot contamination, while the third term quantifies the inter-cell pilot contamination. A conventional pilot allocation strategy consists in assigning mutually orthogonal pilots to users within the same cell, and re-using the pilot sequences over different cells [5]. This is a reasonable choice, as intra-cell pilot contamination is presumably stronger than inter-cell pilot contamination. We let τp = fK, where f is referred to as the pilot reuse factor.
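The orthogonality property used above (pilot inner products equal to τp for identical pilots and 0 otherwise) can be checked numerically with, e.g., a DFT-based pilot book; the construction is an assumed example, since the paper does not prescribe a specific pilot set.

```python
import numpy as np

# Assumed DFT construction of tau_p mutually orthogonal, unit-modulus pilots:
# column j is phi_j[n] = exp(2*pi*1j*n*j/tau_p), so phi_i^H phi_j = tau_p * delta_ij.
tau_p = 8
n = np.arange(tau_p)
Phi = np.exp(2j * np.pi * np.outer(n, n) / tau_p)  # tau_p x tau_p pilot book

G = Phi.conj().T @ Phi  # Gram matrix: tau_p on the diagonal, 0 elsewhere
```

The Gram matrix G makes the dichotomy in (2) explicit: correlating with φ_{jk} keeps only the pilot-sharing users' channels, scaled by τp, while orthogonal users are nulled.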
Importantly, in order not to jeopardize the ultra-reliability of the URLLC transmissions, we assume that unique orthogonal pilot sequences are assigned to all the URLLC users in the network, whenever feasible (namely, when τp > LKe). Summarizing, the pilot allocation strategy we propose primarily aims to prevent the URLLC users from being affected by pilot contamination, and secondarily to prevent intra-cell pilot contamination. Finally, if τp is sufficiently large, that is τp ≥ LK, then mutually orthogonal pilots can be guaranteed to all users. Let us define the set

P_{jk} = \left\{ (l, i) : \phi_{li} = \phi_{jk},\; l = 1, \ldots, L,\; i = 1, \ldots, K \right\},   (3)

including the indices of all the users (and of the corresponding cells) that use the same pilot as user k in cell j. Hence, we can rewrite (2) as

y_{jjk}^p = \sqrt{p_{jk}^p}\, \tau_p h_{jk}^j + \tau_p \sum_{(l,i) \in P_{jk} \setminus (j,k)} \sqrt{p_{li}^p}\, h_{li}^j + N_j^p \phi_{jk}^*.   (4)

The processed uplink signal, y^p_{jjk}, is a sufficient statistic for the estimation of h^j_{jk}. Upon the knowledge of the spatial correlation matrices, BS j can compute the minimum mean-squared error (MMSE) estimate of h^j_{jk}, denoted by ĥ^j_{jk}, based on the observation y^p_{jjk} as [5]

\hat{h}_{jk}^j = \sqrt{p_{jk}^p}\, R_{jk}^j \Psi_{jk}^j y_{jjk}^p,   (5)

where

\Psi_{jk}^j = \Big( \sum_{(l,i) \in P_{jk}} p_{li}^p \tau_p R_{li}^j + \sigma_u^2 I_M \Big)^{-1}.   (6)

The estimation error is given by h̃^j_{jk} = h^j_{jk} − ĥ^j_{jk}, and has correlation matrix C^j_{jk} = E{h̃^j_{jk}(h̃^j_{jk})^H} = R^j_{jk} − p^p_{jk} τp R^j_{jk} Ψ^j_{jk} R^j_{jk}. It follows that ĥ^j_{jk} and h̃^j_{jk} are independent random variables distributed as h̃^j_{jk} ∼ CN(0_M, C^j_{jk}) and ĥ^j_{jk} ∼ CN(0_M, R^j_{jk} − C^j_{jk}).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' DOWNLINK TRANSMISSION In the downlink transmission phase, each BS transmits payload data to all the active users of its cell.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Let At jk be a coefficient that equals 1 if a URLLC transmission takes place at the t-th slot for URLLC user k in cell j, and 0 otherwise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' This coefficient models the random activation pattern of the URLLC users which follows a Bernoulli dis- tribution with parameter au, At jk ∼ Bern(au).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' To handle the coexistence of eMBB and URLLC users in the downlink, we consider two transmission techniques: (i) puncturing, and (ii) superposition coding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Under puncturing, whenever a URLLC transmission is triggered by a BS in a certain slot, all the eMBB transmissions therein are dropped.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' However, the eMBB service can be still guaranteed in the remaining slots of the frame where no URLLC users are active.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Under superposition coding, eMBB transmissions occur in all the slots and each BS linearly combines eMBB and URLLC signals whenever URLLC transmissions are triggered.' 
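Before moving on, the MMSE estimation step of (5)–(6) can be illustrated numerically. The sketch below uses hypothetical dimensions, powers, and randomly generated spatial correlation matrices for the target user and one pilot-sharing interferer; it is not the paper's setup, only an instance of the same formulas:

```python
import numpy as np

rng = np.random.default_rng(1)
M, tau_p, p = 4, 8, 1.0          # antennas, pilot length, pilot power (hypothetical)
sigma2_ul = 0.1                  # uplink noise variance (hypothetical)

def rand_corr(M, rng):
    """Random Hermitian positive-definite correlation matrix with trace M."""
    A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    R = A @ A.conj().T
    return R / np.trace(R).real * M

R_jk = rand_corr(M, rng)         # target user's spatial correlation
R_li = rand_corr(M, rng)         # one pilot-sharing user's correlation

# Psi as in (6): inverse of the summed correlations of all pilot-sharing
# users, scaled by pilot power and length, plus the noise term.
Psi = np.linalg.inv(p * tau_p * (R_jk + R_li) + sigma2_ul * np.eye(M))

def mmse_estimate(y_pilot):
    """MMSE channel estimate, cf. (5): sqrt(p) * R_jk @ Psi @ y_pilot."""
    return np.sqrt(p) * R_jk @ Psi @ y_pilot

# Error correlation matrix: C_jk = R_jk - p * tau_p * R_jk @ Psi @ R_jk.
C_jk = R_jk - p * tau_p * R_jk @ Psi @ R_jk
```

The error correlation matrix `C_jk` is Hermitian positive semi-definite with trace strictly smaller than that of `R_jk`, consistent with the decomposition of the channel into independent estimate and error terms stated after (6).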
The analytical framework detailed next is general, namely, it holds for both the aforementioned transmission techniques upon setting, for an arbitrary BS $j$ and slot $t$, the coefficient
$$\tilde{A}^t_j = \begin{cases} \left(1 - \sum_{i\in\mathcal{K}^u_j} A^t_{ji}\right)^{\!+}, & \text{for puncturing}, \\ 1, & \text{for superposition coding}. \end{cases}$$
Let $\varsigma^e_{jk}[n]$ or $\varsigma^u_{jk}[n]$ be the data symbol transmitted by BS $j$ to user $k$ over an arbitrary channel use $n$, if $k$ is an eMBB user or a URLLC user, respectively. We assume that $\varsigma^s_{jk}[n] \sim \mathcal{CN}(0, 1)$, with $s \in \{e, u\}$. A slot consists of $n_d$ channel uses, with $n_d = \lfloor \tau_d / T \rfloor$, and equals the length of the URLLC codeword. The data symbol is precoded by using the $M$-dimensional precoding vector $\mathbf{w}_{jk}$, which is a function of the CSI acquired at the BS during the uplink training. It also holds that $\mathbb{E}\{\|\mathbf{w}_{jk}\|^2\} = 1$. The data signal transmitted by BS $j$ over an arbitrary channel use $n$ of slot $t$ is given by
$$\mathbf{x}^t_j[n] = \tilde{A}^t_j \sum_{k\in\mathcal{K}^e_j} \sqrt{\rho^e_{jk}}\,\mathbf{w}_{jk}\varsigma^e_{jk}[n] + \sum_{i\in\mathcal{K}^u_j} A^t_{ji}\sqrt{\rho^u_{ji}}\,\mathbf{w}_{ji}\varsigma^u_{ji}[n], \quad (7)$$
with $n = 1, \ldots, n_d$, and where $\rho^e_{jk}$ and $\rho^u_{ji}$ are the downlink transmit powers used by BS $j$ for its eMBB user $k$ and URLLC user $i$, respectively, satisfying the following per-BS power constraint
$$\mathbb{E}\{\|\mathbf{x}^t_j[n]\|^2\} = \tilde{A}^t_j \sum_{k\in\mathcal{K}^e_j} \rho^e_{jk} + \sum_{i\in\mathcal{K}^u_j} A^t_{ji}\rho^u_{ji} \leq \rho^{\max}_j, \quad (8)$$
with $j = 1, \ldots, L$, and where $\rho^{\max}_j$ is the maximum transmit power at BS $j$. The data signal received at user $k$ in cell $j$ over an arbitrary channel use $n$ of slot $t$ is denoted as $y^{t,s}_{jk}[n]$, with $s \in \{e, u\}$. In line with conventional massive MIMO operation, we assume that the users do not acquire the instantaneous downlink CSI, but rather rely on a mean-value approximation of their downlink precoded channels. Such an approximation is accurate if channel hardening occurs.
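The construction of the transmit signal in (7) and the power accounting in (8) can be sketched as follows, for a single eMBB and a single URLLC user per cell; the powers, activation probability, and array size are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
M, a_u = 4, 0.3                      # antennas, URLLC activation probability (hypothetical)
rho_e, rho_u = 0.6, 0.4              # per-user downlink powers (hypothetical)
rho_max = 1.0                        # per-BS power budget (hypothetical)

# Unit-norm precoders, so the constraint E[||w||^2] = 1 is met exactly.
w_e = rng.standard_normal(M) + 1j * rng.standard_normal(M); w_e /= np.linalg.norm(w_e)
w_u = rng.standard_normal(M) + 1j * rng.standard_normal(M); w_u /= np.linalg.norm(w_u)

A_t = rng.binomial(1, a_u)           # URLLC activation, A ~ Bern(a_u)

def tx_signal(mode, s_e, s_u):
    """Transmit vector of (7): puncturing drops eMBB when URLLC is active,
    superposition coding keeps both. Returns (signal, A_tilde)."""
    A_tilde = max(1 - A_t, 0) if mode == "puncturing" else 1
    x = (A_tilde * np.sqrt(rho_e) * w_e * s_e
         + A_t * np.sqrt(rho_u) * w_u * s_u)
    return x, A_tilde

def tx_power(A_tilde):
    """Average radiated power, cf. the left-hand side of (8)."""
    return A_tilde * rho_e + A_t * rho_u
```

Under puncturing the eMBB power term is switched off whenever the URLLC user is active, so the budget in (8) is trivially respected; under superposition coding both terms are radiated simultaneously, so the per-user powers must be chosen so that their sum stays below $\rho^{\max}_j$.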
If user $k$ in cell $j$ is an eMBB user, namely $k \in \mathcal{K}^e_j$, then its received data signal over an arbitrary channel use $n$ of slot $t$ can be written as in (9) at the top of the next page, where $w_{jk}[n] \sim \mathcal{CN}(0, \sigma^2_d)$ is the i.i.d. receiver noise with variance $\sigma^2_d$, and we have defined $g^{li}_{jk} = (\mathbf{h}^l_{jk})^H \mathbf{w}_{li}$, namely, the precoded downlink (scalar) channel between the BS in cell $l$, using the precoding vector intended for its user $i$, and the $k$-th user in cell $j$. If user $k$ in cell $j$ is a URLLC user, its received data signal over an arbitrary channel use $n$ in slot $t$ can be written as in (10) at the top of the next page. Equation (9) emphasizes the fact that user $k$ in cell $j$ solely knows the statistical CSI of the downlink channel, that is, $\mathbb{E}\{g^{jk}_{jk}\}$. The second term in (9) represents the self-interference due to this lack of instantaneous CSI, referred to as beamforming gain uncertainty.
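The channel-hardening argument behind the mean-value approximation can be illustrated with a small Monte Carlo sketch. MRT-style precoding ($\mathbf{w} = \mathbf{h}/\|\mathbf{h}\|$, so that $g = \mathbf{h}^H\mathbf{w} = \|\mathbf{h}\|$) is assumed here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def hardening_spread(M, trials=2000):
    """Relative spread (std/mean) of the precoded scalar channel g = h^H w
    under MRT precoding w = h/||h||, for i.i.d. CN(0,1) channel entries."""
    H = (rng.standard_normal((trials, M))
         + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    g = np.linalg.norm(H, axis=1)    # g = ||h|| in this special case
    return g.std() / g.mean()

# As M grows, g concentrates around its mean (spread ~ 1/sqrt(M)), so
# treating E[g] as the true precoded channel becomes accurate.
```

This is why, in the model above, replacing $g^{jk}_{jk}$ by $\mathbb{E}\{g^{jk}_{jk}\}$ at the user side incurs only a small self-interference penalty when the array is large.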
Going forward, the intra-cell inter-service interference and intra-cell intra-service interference terms represent the interference caused by the URLLC and eMBB users of cell $j$, respectively. This is presumably stronger than the inter-cell interference caused by the eMBB users (i.e., intra-service) and the URLLC users (i.e., inter-service) in the other cells. A similar distinction of the various signal contributions is reported in (10) for URLLC user $k$ in cell $j$. In this case, the lack of instantaneous CSI at the user will be highlighted in the next section.

III. Performance Analysis

In this section, we evaluate the downlink performance of eMBB and URLLC users. As per the eMBB users, we consider the spectral efficiency (SE) by applying the infinite-blocklength information-theoretic results established in the ergodic regime [4], [5], [29]. An achievable downlink SE, namely a lower bound on the ergodic downlink capacity, can be obtained by applying the popular hardening bound technique [4], [5] on the signal model in (9), by treating all the interference sources as uncorrelated noise.
$$\begin{aligned} y^{t,e}_{jk}[n] = {} & \underbrace{\mathbb{E}\{g^{jk}_{jk}\}\,\tilde{A}^t_j\sqrt{\rho^e_{jk}}\,\varsigma^e_{jk}[n]}_{\text{desired signal}} + \underbrace{\left(g^{jk}_{jk} - \mathbb{E}\{g^{jk}_{jk}\}\right)\tilde{A}^t_j\sqrt{\rho^e_{jk}}\,\varsigma^e_{jk}[n]}_{\text{self-interference}} + \underbrace{\sum_{i\in\mathcal{K}^u_j} g^{ji}_{jk}A^t_{ji}\sqrt{\rho^u_{ji}}\,\varsigma^u_{ji}[n]}_{\text{intra-cell inter-service interference}} \\ & + \underbrace{\sum_{i\in\mathcal{K}^e_j\setminus\{k\}} g^{ji}_{jk}\tilde{A}^t_j\sqrt{\rho^e_{ji}}\,\varsigma^e_{ji}[n]}_{\text{intra-cell intra-service interference}} + \underbrace{\sum_{\substack{l=1 \\ l\neq j}}^{L}\sum_{i\in\mathcal{K}^e_l} g^{li}_{jk}\tilde{A}^t_l\sqrt{\rho^e_{li}}\,\varsigma^e_{li}[n]}_{\text{inter-cell intra-service interference}} \\ & + \underbrace{\sum_{\substack{l=1 \\ l\neq j}}^{L}\sum_{i\in\mathcal{K}^u_l} g^{li}_{jk}A^t_{li}\sqrt{\rho^u_{li}}\,\varsigma^u_{li}[n]}_{\text{inter-cell inter-service interference}} + \underbrace{w_{jk}[n]}_{\text{noise}} \end{aligned} \quad (9)$$
$$y^{t,u}_{jk}[n] = \underbrace{g^{jk}_{jk}A^t_{jk}\sqrt{\rho^u_{jk}}\,\varsigma^u_{jk}[n]}_{\text{desired signal}} + \underbrace{\sum_{i\in\mathcal{K}^u_j\setminus\{k\}} g^{ji}_{jk}A^t_{ji}\sqrt{\rho^u_{ji}}\,\varsigma^u_{ji}[n]}_{\text{intra-cell intra-service interference}} + \underbrace{\sum_{\substack{l=1 \\ l\neq j}}^{L}\sum_{i\in\mathcal{K}^u_l} g^{li}_{jk}A^t_{li}\sqrt{\rho^u_{li}}\,\varsigma^u_{li}[n]}_{\text{inter-cell intra-service interference}} + \underbrace{\sum_{l=1}^{L}\sum_{i\in\mathcal{K}^e_l} g^{li}_{jk}\tilde{A}^t_l\sqrt{\rho^e_{li}}\,\varsigma^e_{li}[n]}_{\text{inter-service interference}} + \underbrace{w_{jk}[n]}_{\text{noise}} \quad (10)$$
Specifically, an achievable downlink spectral efficiency of an arbitrary eMBB user $k$ in cell $j$ is given by
$$\mathrm{SE}^e_{jk} = \frac{\tau_d}{\tau_c}\,\frac{1}{T}\sum_{t=1}^{T}\log_2\!\left(1 + \mathrm{SINR}^{t,e}_{jk}\right) \ \text{[bits/s/Hz]}, \quad (11)$$
where $\tau_d/\tau_c$ accounts for the estimation overhead, and
$$\mathrm{SINR}^{t,e}_{jk} = \frac{\tilde{A}^t_j\rho^e_{jk}\left|\mathbb{E}\{g^{jk}_{jk}\}\right|^2}{\displaystyle\sum_{l=1}^{L}\sum_{i=1}^{K} \varrho^t_{li}\,\mathbb{E}\{|g^{li}_{jk}|^2\} - \tilde{A}^t_j\rho^e_{jk}\left|\mathbb{E}\{g^{jk}_{jk}\}\right|^2 + \sigma^2_d} \quad (12)$$
is the effective SINR of user $k \in \mathcal{K}^e_j$, where the expectations are taken with respect to the random channel realizations, and
$$\varrho^t_{li} = \begin{cases} A^t_{li}\rho^u_{li}, & \text{if } i \in \mathcal{K}^u_l, \\ \tilde{A}^t_l\rho^e_{li}, & \text{if } i \in \mathcal{K}^e_l. \end{cases} \quad (13)$$
The expression of the achievable SE shown in (11) holds for any choice of precoding scheme, any channel estimator and any channel distributions. Importantly, it accounts for any choice of coexistence technique between heterogeneous services, namely puncturing or superposition coding.
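A numerical sketch of how (11)–(13) combine is given below. The channel moments, effective powers, and frame parameters are made-up placeholders (not values from the paper); the point is only the mechanics of the hardening-bound SINR and the slot-averaged SE:

```python
import numpy as np

# Hypothetical per-slot quantities for one eMBB user: the numerator of (12)
# uses |E[g_jk^jk]|^2; the denominator sums rho * E[|g_jk^li|^2] over all
# transmitters, subtracts the numerator, and adds the noise variance.
tau_d, tau_c, T = 190, 200, 10       # payload uses, coherence block, slots per frame
sigma2_d = 0.1                       # downlink noise variance (hypothetical)
mean_gain = 0.9 + 0j                 # E[g_jk^jk] (assumed value)
second_moments = np.array([0.95, 0.10, 0.05])  # E[|g|^2] toward each transmitter
powers = np.array([1.0, 0.5, 0.5])   # effective powers rho^t_li, cf. (13)
A_tilde = 1                          # slot not punctured

def sinr_embb():
    """Effective SINR of (12) under the hardening bound."""
    num = A_tilde * powers[0] * abs(mean_gain) ** 2
    den = powers @ second_moments - num + sigma2_d
    return num / den

def se_embb(sinrs):
    """Achievable SE of (11), averaged over the T slots of the frame."""
    return tau_d / tau_c * np.mean(np.log2(1.0 + np.asarray(sinrs)))
```

Note that `second_moments[0]` must exceed `abs(mean_gain)**2` (a second moment is never smaller than the squared mean), which keeps the denominator of (12) positive; the gap between the two is exactly the beamforming gain uncertainty term.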
The infinite-blocklength analysis above rests on the block-fading channel assumption, entailing that each eMBB codeword is long enough to span a large number of independent fading realizations. This assumption cannot be applied to the URLLC case. For the URLLC user, we carry out a nonasymptotic analysis of the downlink error probability on a per-slot basis by applying the finite-blocklength information-theoretic results established in [8]. Firstly, we rewrite (10) as
\[
y^{t,u}_{jk}[n] = g^{jk}_{jk}\, q_{jk}[n] + z_{jk}[n], \quad n = 1,\ldots,n_d, \quad (14)
\]
where $q_{jk}[n] = A^t_{jk}\sqrt{\rho^u_{jk}}\,\varsigma^u_{jk}[n]$, and
\[
z_{jk}[n] = \sum_{i \in \mathcal{K}^u_j \setminus \{k\}} g^{ji}_{jk}\, q_{ji}[n] + \sum_{i \in \mathcal{K}^e_j} g^{ji}_{jk}\,\tilde{A}^t_j\sqrt{\rho^e_{ji}}\,\varsigma^e_{ji}[n] + \sum_{\substack{l=1 \\ l \neq j}}^{L}\Bigg(\sum_{i \in \mathcal{K}^u_l} g^{li}_{jk}\, q_{li}[n] + \sum_{i \in \mathcal{K}^e_l} g^{li}_{jk}\,\tilde{A}^t_l\sqrt{\rho^e_{li}}\,\varsigma^e_{li}[n]\Bigg) + w_{jk}[n]. \quad (15)
\]
However, URLLC user $k$ in cell $j$ does not have access to $g^{jk}_{jk}$; it performs data decoding by leveraging only its mean value, $\hat{g}^{jk}_{jk} = \mathbb{E}\{(\mathbf{h}^j_{jk})^{\mathsf{H}}\mathbf{w}_{jk}\}$, which is treated as perfect. This estimate is accurate if channel hardening holds. Notice that the precoded channel $g^{jk}_{jk}$ is frequency-flat and time-invariant over the transmission of the $n_d$-length URLLC codeword in slot $t$. Moreover, $g^{jk}_{jk}$ remains constant for any other transmission from BS $j$ to user $k$ over slots of the same TDD frame. Given all channels and precoding vectors, the effective noise terms $\{z_{jk}[n] \in \mathbb{C};\ n = 1,\ldots,n_d\}$ are conditionally i.i.d. random variables with variance $\sigma^2_{jk}$, i.e., $\mathcal{CN}(0, \sigma^2_{jk})$, given by
\[
\sigma^2_{jk} = \sum_{i \in \mathcal{K}^u_j \setminus \{k\}} A^t_{ji}\rho^u_{ji}|g^{ji}_{jk}|^2 + \sum_{i \in \mathcal{K}^e_j} \tilde{A}^t_j\rho^e_{ji}|g^{ji}_{jk}|^2 + \sum_{\substack{l=1 \\ l \neq j}}^{L}\Bigg(\sum_{i \in \mathcal{K}^u_l} A^t_{li}\rho^u_{li}|g^{li}_{jk}|^2 + \sum_{i \in \mathcal{K}^e_l} \tilde{A}^t_l\rho^e_{li}|g^{li}_{jk}|^2\Bigg) + \sigma^2_d. \quad (16)
\]
To determine the transmitted codeword $\mathbf{q}_{jk} = [q_{jk}[1],\ldots,q_{jk}[n_d]]^{\mathsf{T}}$, user $k$ in cell $j$ employs a mismatched scaled nearest-neighbor (SNN) decoder [30], which selects the codeword $\hat{\mathbf{q}}_{jk}$ from the codebook $\mathcal{C}$ by applying the rule
\[
\hat{\mathbf{q}}_{jk} = \arg\min_{\bar{\mathbf{q}}_{jk} \in \mathcal{C}} \left\| \mathbf{y}^{t,u}_{jk} - \hat{g}^{jk}_{jk}\,\bar{\mathbf{q}}_{jk} \right\|^2, \quad (17)
\]
ON THE COEXISTENCE OF EMBB AND URLLC IN MULTI-CELL MASSIVE MIMO
where $\mathbf{y}^{t,u}_{jk} = [y^{t,u}_{jk}[1],\ldots,y^{t,u}_{jk}[n_d]]^{\mathsf{T}} \in \mathbb{C}^{n_d}$ is the received data vector. Let $\epsilon^{\mathrm{dl}}_{jk} = \Pr\{\hat{\mathbf{q}}_{jk} \neq \mathbf{q}_{jk}\}$ be the downlink error probability experienced by URLLC user $k$ in cell $j$ under SNN decoding. An upper bound on $\epsilon^{\mathrm{dl}}_{jk}$ is obtained by using the standard random-coding approach [31],
\[
\epsilon^{\mathrm{dl}}_{jk} \le \mathbb{E}_{g^{jk}_{jk}}\left\{ \Pr\left[ \sum_{n=1}^{n_d} \imath_s(q_{jk}[n], y^{t,u}_{jk}[n]) \le \log\frac{m-1}{r} \,\Big|\, g^{jk}_{jk} \right] \right\}, \quad (18)
\]
where $m = 2^b$ is the number of codewords of length $n_d$ that convey $b$ information bits, $r$ is a random variable uniformly distributed in the interval $[0,1]$, and $\imath_s(q_{jk}[n], y^{t,u}_{jk}[n])$ is the generalized information density, given by
\[
\imath_s(q_{jk}[n], y^{t,u}_{jk}[n]) = -s\left| y^{t,u}_{jk}[n] - \hat{g}^{jk}_{jk}\, q_{jk}[n] \right|^2 + \frac{s\,|y^{t,u}_{jk}[n]|^2}{1 + s\rho^u_{jk}|\hat{g}^{jk}_{jk}|^2} + \log\!\left(1 + s\rho^u_{jk}|\hat{g}^{jk}_{jk}|^2\right), \quad (19)
\]
for all $s > 0$. In (18), the expectation is taken over the distribution of $g^{jk}_{jk}$, while the probability is computed with respect to the downlink data symbols $\{q_{jk}[n]\}_{n=1}^{n_d}$, the effective additive noise $\{z_{jk}[n]\}_{n=1}^{n_d}$, and the random variable $r$. Evaluating the upper bound in (18) entails a very demanding numerical computation: one must first obtain the probability, and then numerically tighten the upper bound toward the low error-probability target of the URLLC use case by optimizing with respect to $s$.
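The generalized information density in (19) is a simple scalar function of the transmitted symbol, the received sample, and the mean precoded channel. Below is a minimal Python sketch; the operating point at the end (unit symbol, perfect reception, unit power, $s = 1$) is a hypothetical example chosen so the value can be checked by hand.

```python
import math

def info_density(q, y, g_hat, rho, s):
    """Generalized information density i_s(q, y) of Eq. (19).

    q, y  : transmitted symbol and received sample (complex)
    g_hat : mean precoded channel \\hat{g}^{jk}_{jk}, treated as perfect
    rho   : downlink power rho^u_{jk}
    s     : decoder parameter, s > 0
    """
    c = 1.0 + s * rho * abs(g_hat) ** 2
    return (-s * abs(y - g_hat * q) ** 2      # mismatched-distance term
            + s * abs(y) ** 2 / c             # received-energy term
            + math.log(c))                    # log-normalization term

# Hypothetical operating point: y = g_hat * q exactly, so the first term is 0
val = info_density(q=1 + 0j, y=1 + 0j, g_hat=1 + 0j, rho=1.0, s=1.0)
```

At this point the density evaluates to $1/2 + \log 2 \approx 1.193$ nats.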
Luckily, we can reliably approximate the right-hand side of (18) in closed form, with a significant relief of the computational burden, by using the saddlepoint approximation provided in [8, Th. 2]. The existence of a saddlepoint approximation is guaranteed by the fact that the third derivative of the moment-generating function of $-\imath_s(q_{jk}[n], y^{t,u}_{jk}[n])$ exists in a neighborhood of zero delimited by the values $\underline{\varepsilon} < 0 < \overline{\varepsilon}$ given by [8, Appendix B]
\[
\underline{\varepsilon} = -\frac{\sqrt{(\zeta_b - \zeta_a)^2 + 4\zeta_a\zeta_b(1-\mu)} + \zeta_a - \zeta_b}{2\zeta_a\zeta_b(1-\mu)}, \quad (20)
\]
\[
\overline{\varepsilon} = \frac{\sqrt{(\zeta_b - \zeta_a)^2 + 4\zeta_a\zeta_b(1-\mu)} - \zeta_a + \zeta_b}{2\zeta_a\zeta_b(1-\mu)}, \quad (21)
\]
where
\[
\zeta_a = s\left(\rho^u_{jk}\big|g^{jk}_{jk} - \hat{g}^{jk}_{jk}\big|^2 + \sigma^2\right), \quad (22)
\]
\[
\zeta_b = \frac{s}{1 + s\rho^u_{jk}|\hat{g}^{jk}_{jk}|^2}\left(\rho^u_{jk}|g^{jk}_{jk}|^2 + \sigma^2\right), \quad (23)
\]
\[
\mu = \frac{s^2\left|\rho^u_{jk}|g^{jk}_{jk}|^2 + \sigma^2 - (g^{jk}_{jk})^{*}\hat{g}^{jk}_{jk}\rho^u_{jk}\right|^2}{\zeta_a\zeta_b\left(1 + s\rho^u_{jk}|\hat{g}^{jk}_{jk}|^2\right)}. \quad (24)
\]
The saddlepoint approximation hinges on the cumulant-generating function of $-\imath_s(q_{jk}[n], y^{t,u}_{jk}[n])$, given by
\[
\upsilon(\varepsilon) = \log \mathbb{E}\left\{ e^{-\varepsilon\,\imath_s(q_{jk}[n], y^{t,u}_{jk}[n])} \right\}, \quad (25)
\]
and on its first derivative $\upsilon'(\varepsilon)$ and second derivative $\upsilon''(\varepsilon)$, which for all $\varepsilon \in (\underline{\varepsilon}, \overline{\varepsilon})$ are
\[
\upsilon(\varepsilon) = -\varepsilon\log\left(1 + s\rho^u_{jk}|\hat{g}^{jk}_{jk}|^2\right) - \log\left(1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a\zeta_b(1-\mu)\varepsilon^2\right), \quad (26)
\]
\[
\upsilon'(\varepsilon) = -\log\left(1 + s\rho^u_{jk}|\hat{g}^{jk}_{jk}|^2\right) - \frac{(\zeta_b - \zeta_a) - 2\zeta_a\zeta_b(1-\mu)\varepsilon}{1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a\zeta_b(1-\mu)\varepsilon^2}, \quad (27)
\]
\[
\upsilon''(\varepsilon) = \left(\frac{(\zeta_b - \zeta_a) - 2\zeta_a\zeta_b(1-\mu)\varepsilon}{1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a\zeta_b(1-\mu)\varepsilon^2}\right)^2 + \frac{2\zeta_a\zeta_b(1-\mu)}{1 + (\zeta_b - \zeta_a)\varepsilon - \zeta_a\zeta_b(1-\mu)\varepsilon^2}. \quad (28)
\]
Let $m = e^{n_d R}$ for some strictly positive transmission rate $R = (\log m)/n_d$, and let $\varepsilon \in (\underline{\varepsilon}, \overline{\varepsilon})$ be the solution to the equation $R = -\upsilon'(\varepsilon)$. Let $I_s$ be the generalized mutual information [30], defined as $I_s = \mathbb{E}\{\imath_s(q_{jk}[1], y^{t,u}_{jk}[1])\} = -\upsilon'(0)$. Lastly, consider the critical rate [31, Eq. (5.6.30)], given by $R^{\mathrm{cr}}_s = -\upsilon'(1)$. Then, we have three possible saddlepoint approximations for the error-probability upper bound [8]. If $\varepsilon \in [0,1]$, then $R^{\mathrm{cr}}_s \le R \le I_s$ and
\[
\Pr\left[\sum_{n=1}^{n_d} \imath_s(q_{jk}[n], y^{t,u}_{jk}[n]) \le \log\frac{e^{n_d R}-1}{r}\right] \approx e^{n_d[\upsilon(\varepsilon)+\varepsilon R]}\left[\Psi_{n_d,\varepsilon}(\varepsilon) + \Psi_{n_d,\varepsilon}(1-\varepsilon)\right], \quad (29)
\]
where
\[
\Psi_{n_d,\varepsilon}(\ell) \triangleq e^{\frac{1}{2} n_d \ell^2 \upsilon''(\varepsilon)}\, Q\!\left(\ell\sqrt{n_d\,\upsilon''(\varepsilon)}\right). \quad (30)
\]
If $\varepsilon > 1$, then $R < R^{\mathrm{cr}}_s$ and
\[
\Pr\left[\sum_{n=1}^{n_d} \imath_s(q_{jk}[n], y^{t,u}_{jk}[n]) \le \log\frac{e^{n_d R}-1}{r}\right] \approx e^{n_d[\upsilon(1)+R]}\left[\tilde{\Psi}_{n_d}(1,1) + \tilde{\Psi}_{n_d}(0,-1)\right], \quad (31)
\]
where
\[
\tilde{\Psi}_{n_d}(\ell_1,\ell_2) \triangleq e^{n_d \ell_1\left[R^{\mathrm{cr}}_s - R + \frac{1}{2}\upsilon''(1)\right]}\, Q\!\left(\ell_1\sqrt{n_d\,\upsilon''(1)} + \ell_2\,\frac{n_d(R^{\mathrm{cr}}_s - R)}{\sqrt{n_d\,\upsilon''(1)}}\right). \quad (32)
\]
If $\varepsilon < 0$, then $R > I_s$ and
\[
\Pr\left[\sum_{n=1}^{n_d} \imath_s(q_{jk}[n], y^{t,u}_{jk}[n]) \le \log\frac{e^{n_d R}-1}{r}\right] \approx 1 - e^{n_d[\upsilon(\varepsilon)+\varepsilon R]}\left[\Psi_{n_d,\varepsilon}(-\varepsilon) - \Psi_{n_d,\varepsilon}(1-\varepsilon)\right]. \quad (33)
\]
The saddlepoint approximation is more accurate in the URLLC massive MIMO regime than the conventionally used normal approximation [23]: the former characterizes the exponential decay of the error probability, i.e., the error exponent, as a function of the URLLC codeword length and of the transmission-rate requirement $R$, and invokes the Berry-Esseen central-limit theorem (on which the normal approximation relies) only to characterize the multiplicative factor that follows the error-exponent term. The normal approximation, whose formulation directly involves the generalized mutual information $I_s$ but not $R$, is accurate only when $I_s$ is close to $R$.
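The closed-form machinery in (20)-(33) is straightforward to code up. The sketch below is a minimal, self-contained Python rendition for a single channel realization: it builds $\zeta_a$, $\zeta_b$, $\mu$ from (22)-(24), the CGF and its derivatives from (26)-(28), solves $R = -\upsilon'(\varepsilon)$ by bisection, and evaluates only the first case, $\varepsilon \in [0,1]$, of (29)-(30). All numerical inputs ($g$, $\hat{g}$, $\rho$, $\sigma^2$, $s$, $n_d$, $b$) at the bottom are hypothetical.

```python
import math

def Q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def saddlepoint_error_prob(g, g_hat, rho, sigma2, s, n_d, b):
    """Saddlepoint approximation of the RHS of (18), case eps in [0, 1].

    g, g_hat : precoded channel and its mean (complex)
    rho      : downlink power; sigma2: effective noise variance, Eq. (16)
    s        : decoder parameter; n_d: blocklength; b: information bits
    """
    c = 1.0 + s * rho * abs(g_hat) ** 2
    zeta_a = s * (rho * abs(g - g_hat) ** 2 + sigma2)            # (22)
    zeta_b = s / c * (rho * abs(g) ** 2 + sigma2)                # (23)
    mu = (s ** 2 * abs(rho * abs(g) ** 2 + sigma2
                       - g.conjugate() * g_hat * rho) ** 2
          / (zeta_a * zeta_b * c))                               # (24)

    def D(e):   # common denominator of (26)-(28)
        return 1.0 + (zeta_b - zeta_a) * e - zeta_a * zeta_b * (1 - mu) * e * e

    def v(e):   # CGF, Eq. (26)
        return -e * math.log(c) - math.log(D(e))

    def v1(e):  # first derivative, Eq. (27)
        return -math.log(c) - ((zeta_b - zeta_a)
                               - 2 * zeta_a * zeta_b * (1 - mu) * e) / D(e)

    def v2(e):  # second derivative, Eq. (28)
        num = (zeta_b - zeta_a) - 2 * zeta_a * zeta_b * (1 - mu) * e
        return (num / D(e)) ** 2 + 2 * zeta_a * zeta_b * (1 - mu) / D(e)

    R = b * math.log(2.0) / n_d   # rate in nats per channel use
    # Bisection for R = -v'(eps); -v' is decreasing since the CGF is convex
    lo, hi = 0.0, 1.0             # only the case eps in [0, 1] is covered here
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if -v1(mid) > R:
            lo = mid
        else:
            hi = mid
    eps = 0.5 * (lo + hi)

    def Psi(l):  # Eq. (30)
        return math.exp(0.5 * n_d * l * l * v2(eps)) * Q(l * math.sqrt(n_d * v2(eps)))

    return math.exp(n_d * (v(eps) + eps * R)) * (Psi(eps) + Psi(1 - eps)), eps

# Hypothetical operating point with R between the critical rate and I_s
p, eps = saddlepoint_error_prob(g=1 + 0j, g_hat=0.95 + 0j, rho=1.0,
                                sigma2=0.1, s=1.0, n_d=100, b=150)
```

The remaining two cases, (31)-(33), reuse the same $\upsilon$, $\upsilon'$, $\upsilon''$ and only change the final expression, so they can be added to the same function by branching on the value of $\varepsilon$.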
This operating regime does not hold for URLLC, wherein $R$ is typically lower than $I_s$ in order to meet the very low error-probability targets. Once the approximate upper bounds on the downlink error probability are obtained via the saddlepoint approximation, we compute the downlink network availability [8], $\eta^{\mathrm{dl}}$, as
\[
\eta^{\mathrm{dl}} = \Pr\left\{ \epsilon^{\mathrm{dl}}_{jk} \le \epsilon^{\mathrm{dl}}_{\mathrm{target}} \right\}, \quad (34)
\]
which measures the probability that the target error probability $\epsilon^{\mathrm{dl}}_{\mathrm{target}}$ is satisfied by an arbitrary user $k$ in cell $j$ in the presence of interfering users. While the expectation in the error-probability definition is taken with respect to the small-scale fading and the effective additive noise, given a large-scale fading realization, the probability in the network-availability definition is computed with respect to the large-scale fading (i.e., path loss, shadowing, etc.). The expression of the network availability shown in (34) holds for any choice of precoding scheme, any channel estimator, and any channel distribution.
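In a simulation, (34) becomes an empirical fraction over large-scale fading snapshots. The helper below is a minimal sketch of that Monte Carlo estimate; the per-snapshot error probabilities fed to it are hypothetical placeholders for values produced by the saddlepoint approximation.

```python
def network_availability(error_probs, eps_target):
    """Empirical network availability, Eq. (34): the fraction of large-scale
    fading snapshots in which the user's downlink error probability meets
    the URLLC target."""
    met = sum(1 for e in error_probs if e <= eps_target)
    return met / len(error_probs)

# Hypothetical per-snapshot error probabilities for one user
eta = network_availability([1e-6, 1e-4, 1e-5, 1e-2], eps_target=1e-5)
```

Two of the four snapshots meet the $10^{-5}$ target, so the estimate is $0.5$; in practice many thousands of snapshots are needed to resolve availabilities close to one.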
Importantly, it accounts for any choice of coexistence technique between the heterogeneous services, namely puncturing or superposition coding.

IV. PRECODING AND POWER CONTROL

The choice of the precoding scheme and of the downlink power allocation deeply affects the SE of the eMBB users and the network availability of the URLLC users. For the sake of comparison, we herein consider three precoding schemes and three power allocation strategies. The general expression for the precoding vector intended for user $k$ in cell $j$ is given by
\[
\mathbf{w}_{jk} = \frac{\mathbf{v}_{jk}}{\|\mathbf{v}_{jk}\|}, \quad (35)
\]
where the denominator serves to make the average power of the precoding vector unitary, and $\mathbf{v}_{jk}$ is characterized next.

Multi-cell MMSE (M-MMSE):
\[
\mathbf{v}^{\mathrm{M\text{-}MMSE}}_{jk} = \left[ \left( \sum_{l=1}^{L} \hat{\mathbf{H}}^j_l \mathbf{P}_l (\hat{\mathbf{H}}^j_l)^{\mathsf{H}} + \boldsymbol{\Upsilon}_j + \sigma^2_u \mathbf{I}_M \right)^{-1} \hat{\mathbf{H}}^j_j \mathbf{P}_j \right]_{:,k},
\]
where $\mathbf{P}_l = \mathrm{diag}(p_{l1},\ldots,p_{lK}) \in \mathbb{R}^{K\times K}$ is the matrix with the uplink transmit powers of all the users in cell $l$ as diagonal elements, $\boldsymbol{\Upsilon}_j = \sum_{l=1}^{L}\sum_{i=1}^{K} p_{li}\mathbf{C}^j_{li}$, and $\hat{\mathbf{H}}^j_l = [\hat{\mathbf{h}}^j_{l1} \cdots \hat{\mathbf{h}}^j_{lK}]$. M-MMSE precoding provides a nearly optimal downlink SE, but requires each BS to acquire the CSI and the statistical CSI of all the users of the multi-cell system. Moreover, the computation of the precoding vector, which entails inverting an $M \times M$ matrix, may be demanding for large BS arrays. Although impractical, M-MMSE precoding will serve as a benchmark.

Regularized zero-forcing (RZF):
\[
\mathbf{v}^{\mathrm{RZF}}_{jk} = \left[ \hat{\mathbf{H}}^j_j \left( (\hat{\mathbf{H}}^j_j)^{\mathsf{H}} \hat{\mathbf{H}}^j_j + \sigma^2_u \mathbf{P}_j^{-1} \right)^{-1} \right]_{:,k}.
\]
Compared to M-MMSE, RZF precoding requires each BS to estimate the channels of its own users only.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Moreover, computing the RZF precoding vector is computationally cheaper since the size of the matrix to be inverted is K×K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' However, RZF does only suppress the intra-cell interference while, unlike M- MMSE, does not provide to the users any protection mech- anism against inter-cell interference and channel estimation error.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Maximum Ratio (MR): vMR jk = �hj jk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' It is computationally the cheapest but performance-wise the worst precoding scheme.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' MR only aims at maximizing the power of the desired signal, providing no interference-suppression mechanism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' MR will serve as lower bound on the performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Properly allocating the downlink power can make all the difference to meet the strict reliability requirements of the URLLC and to improve the SE of the eMBB users.' 
Next, we provide three power allocation schemes that take into account the power budget at the BSs, the adopted eMBB-URLLC coexistence strategy, and the URLLC activation pattern, which is known at the BS in the downlink operation.

Equal power allocation (EPA): it consists in setting
\[
\rho^u_{ji} = \frac{\rho^{\max}_j A^t_{ji}}{\tilde{A}^t_j K_e + \sum_{k \in \mathcal{K}^u_j} A^t_{jk}}, \quad i \in \mathcal{K}^u_j, \quad (36)
\]
\[
\rho^e_{jk} = \frac{\rho^{\max}_j \tilde{A}^t_j}{\tilde{A}^t_j K_e + \sum_{i \in \mathcal{K}^u_j} A^t_{ji}}, \quad k \in \mathcal{K}^e_j, \quad (37)
\]
so as to satisfy the per-BS power constraint in (8) with equality and to allocate the same share of power to each user, regardless of its channel conditions and its service requirements.

Weighted fractional power allocation (FPA): it consists in setting the powers as
\[
\rho^u_{ji} = \frac{\omega\,\rho^{\max}_j A^t_{ji} (\beta^j_{ji})^{\nu}}{(1-\omega)\tilde{A}^t_j \sum_{k \in \mathcal{K}^e_j} (\beta^j_{jk})^{\nu} + \omega \sum_{u \in \mathcal{K}^u_j} A^t_{ju} (\beta^j_{ju})^{\nu}}, \quad i \in \mathcal{K}^u_j, \quad (38)
\]
\[
\rho^e_{jk} = \frac{(1-\omega)\,\rho^{\max}_j \tilde{A}^t_j (\beta^j_{jk})^{\nu}}{(1-\omega)\tilde{A}^t_j \sum_{e \in \mathcal{K}^e_j} (\beta^j_{je})^{\nu} + \omega \sum_{i \in \mathcal{K}^u_j} A^t_{ji} (\beta^j_{ji})^{\nu}}, \quad k \in \mathcal{K}^e_j, \quad (39)
\]
where the weight $\omega \in (0,1)$ adjusts the amount of downlink power to be allocated to the URLLC users, while $\nu$ establishes the power control policy as a function of the average channel gain. An opportunistic power allocation is attained by setting $\nu > 0$, whereby more power is allocated to the users with better channel conditions. Conversely, fairness is supported by setting $\nu < 0$, whereby more power is allocated to the users with worse channel conditions. If $\omega \in (0.5, 1)$, a larger share of power is allocated to the URLLC users rather than to the eMBB users, whereas it is the other way around if $\omega \in (0, 0.5)$. Notice that, if $\nu = 0$ and $\omega = 0.5$, the FPA reduces to the EPA.

Optimal power allocation (OPA) for max product SINR: the powers are the solution of the optimization problem
\[
\underset{\{\rho^s_{jk}\}}{\text{maximize}} \ \prod_{j=1}^{L}\prod_{k=1}^{K} \mathrm{SINR}^{t,s}_{jk} \quad (40a)
\]
\[
\text{s.t.} \ \sum_{k=1}^{K} \varrho^t_{jk} \le \rho^{\max}_j, \quad \forall j, \quad (40b)
\]
where the superscript $s = e$ if user $k \in \mathcal{K}^e_j$, and $s = u$ otherwise, while $\varrho^t_{jk}$ is given in (13). Without further entangling the notation in (40), we remark that the SINR of inactive users is fictitiously set to 1 to preserve the formulation of the optimization problem. This power allocation strategy treats all the users as eMBB users; hence, by maximizing a lower bound on the sum SE of the multi-cell system, it would be optimal if no URLLC users were active in a given slot. Although the SINR expression in (12) is meaningless when applied to a URLLC user, we can still heuristically plug the URLLC powers resulting from (40) into the error-probability analysis and motivate this approach by looking at the performance. All the considered power allocation schemes, in principle, run on a slot basis in order to adapt the power coefficients to the URLLC activation pattern. Fortunately, these schemes rely only on knowledge of the statistical CSI, which allows one to pre-compute some power coefficients, or to keep the same power allocation over multiple slots/frames when there are no macroscopic changes in the propagation environment.
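The EPA and FPA rules in (36)-(39), and the claimed reduction of FPA to EPA at $\nu = 0$, $\omega = 0.5$, can be checked with a few lines of Python. The per-cell quantities below (activation flags, average gains, power budget) are hypothetical; activations are modeled as 0/1 flags, so the active loads sum exactly to the budget.

```python
def epa(rho_max, A_u, A_e_flag, K_e):
    """Equal power allocation, Eqs. (36)-(37)."""
    den = A_e_flag * K_e + sum(A_u)
    rho_u = [rho_max * a / den for a in A_u]
    rho_e = [rho_max * A_e_flag / den] * K_e
    return rho_u, rho_e

def fpa(rho_max, A_u, beta_u, A_e_flag, beta_e, omega, nu):
    """Weighted fractional power allocation, Eqs. (38)-(39)."""
    den = ((1 - omega) * A_e_flag * sum(b ** nu for b in beta_e)
           + omega * sum(a * b ** nu for a, b in zip(A_u, beta_u)))
    rho_u = [omega * rho_max * a * b ** nu / den
             for a, b in zip(A_u, beta_u)]
    rho_e = [(1 - omega) * rho_max * A_e_flag * b ** nu / den
             for b in beta_e]
    return rho_u, rho_e

# Hypothetical cell: 3 eMBB users (one inactive), 2 URLLC users, URLLC active
A_u, beta_u = [1, 0, 1], [2e-11, 5e-12, 8e-11]
beta_e = [3e-11, 6e-11]
rho_max = 10.0

rho_u_f, rho_e_f = fpa(rho_max, A_u, beta_u, 1, beta_e, omega=0.5, nu=0.0)
rho_u_0, rho_e_0 = epa(rho_max, A_u, 1, K_e=len(beta_e))
# Per-BS budget: the active loads should sum to rho_max
budget = sum(a * r for a, r in zip(A_u, rho_u_f)) + sum(rho_e_f)
```

Setting $\nu$ away from zero then redistributes the same budget according to the gains $\beta$, opportunistically for $\nu > 0$ and fairness-oriented for $\nu < 0$.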
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Unlike the EPA and the FPA schemes, the OPA scheme requires a certain degree of cooperation among the BSs which must send statistical CSI to let a central processing unit (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=', a master BS) compute the SINR of all the users and solve the optimization problem, and feed them back with the power coefficients to use.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' This would introduce intolerable delay for the URLLC users.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Moreover, solving problem (40), although efficiently as a geometric program [5, Th.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content='2], is unlikely to be doable within a time- slot, especially for crowded networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Hence, the OPA scheme is of limited practical use, but will serve for benchmarking purposes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' SIMULATION RESULTS In this section, we present and discuss the results of our simulations in which the coexistence of eMBB and URLLC is deeply analyzed under different setups.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Specifically, we shed the light on the impact of different factors on the performance, such as the transmission technique and the precoding scheme, the power control strategy, the imperfect CSI and estimation overhead, the pilot contamination, the length and number of slots in a TDD frame, and the characteristics of the URLLC activation pattern.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Our simulation scenario consists of a multi-cell massive MIMO system with L = 4 cells.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Each cell covers a nominal area of 500×500 squared meters, and is served by a BS, placed at the cell center, equipped with a uniform linear array (ULA) with M = 100 equispaced half-wavelength antenna elements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' A wrap-around topology is implemented as in [5, Sec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content='3].' 
The users are dropped uniformly at random over the coverage area, but at a minimum distance of 25 m from the BS. In addition, we assume that the URLLC users are distributed uniformly at random in an area of 125×125 square meters that surrounds the BS. A random realization of the user locations determines a set of large-scale fading coefficients and constitutes a snapshot of the network. For a given network snapshot, the achievable downlink SEs of the active eMBB users are computed according to (11), while the downlink error probabilities of the URLLC users are obtained according to the approximations (29)-(33). The cumulative distribution function (CDF) of the SE and the network availability are then drawn over many network snapshots. The channel correlation matrices are generated according to the popular local scattering spatial correlation model [5, Sec. 2.6], and we assume that the scattering is only localized around the users and uniformly distributed at random with an angular spread of 25° [8]. The average channel gain is obtained according to the non-line-of-sight macro-cell 3GPP model for 2 GHz carriers [32], and is given in dB by β_k = −35.3 − 37.6 log10(d_k/1 m) + F_k for an arbitrary user k placed at a distance d_k from its BS, where F_k ∼ N(0, σ_sh^2) models the log-normal shadowing as an i.i.d. random variable with standard deviation σ_sh = 4 dB. The transmission bandwidth is 20 MHz, and the receiver noise power equals −94 dBm both for the uplink and the downlink. Moreover, we let ρ_j^max = 46 dBm, j = 1, …, L, and the uplink transmit power, both for pilot and payload data, be 23 dBm for all the users. We assume that the URLLC packet consists of b = 160 bits, yielding a transmission rate R = b/n_d, which is suitable for factory automation use cases, such as motion control, and in line with the low-latency requirements [33, Annex A]. Lastly, without loss of generality, we set τ_u = 0 as we only focus on the downlink performance. Unless otherwise stated, we consider TDD frames with length τ_c = 580 channel uses, given by T_c = 2 ms and B_c = 290 kHz, which supports user mobility up to 67.50 km/h.

In the first set of simulations we consider the following setup: K = 20, α = 0.2, a_u = 10^-0.5, τ_p = 80 (no pilot contamination), and T = 5 slots of length n_d = 100 channel uses. In Fig. 2 we plot the CDFs of the achievable downlink SE per "active" eMBB user obtained for different precoding and power allocation strategies, both for superposition coding (top subfigure) and the puncturing technique (bottom subfigure). Under these assumptions, SPC is greatly superior to PUNC, precoding and power allocation strategies being equal. M-MMSE with OPA gives, as expected, the best SE, but EPA performs almost equally well, regardless of the precoding scheme. RZF provides an excellent practical trade-off between M-MMSE and MR. These results suggest that we are approximately operating in an interference-free scenario, thanks to the full and partial interference-suppression mechanisms provided by M-MMSE and RZF, respectively. As per the FPA strategy, in these simulations we have selected ν = 0.5 to promote an opportunistic power allocation and ω = 0.6 to prioritize the URLLC users. Such a choice does not favor the eMBB users and justifies the worst performance of FPA among the considered strategies when SPC is applied.

ON THE COEXISTENCE OF EMBB AND URLLC IN MULTI-CELL MASSIVE MIMO

Fig. 2. CDFs of the achievable downlink SE per active eMBB user, for different transmission, precoding and power allocation strategies. Settings: K = 20, α = 0.2, a_u = 10^-0.5, τ_p = 80, T = 5, n_d = 100.
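The large-scale fading model quoted above is straightforward to evaluate; a minimal sketch follows, where the 100 m example distance is our own illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)

NOISE_DBM = -94.0    # receiver noise power over the 20 MHz bandwidth
SIGMA_SH_DB = 4.0    # shadowing standard deviation [dB]

def channel_gain_db(distance_m, rng=rng):
    """Average channel gain in dB for the 3GPP NLoS macro-cell model at
    2 GHz: beta_k = -35.3 - 37.6*log10(d_k / 1 m) + F_k, with F_k the
    log-normal shadowing term drawn as N(0, SIGMA_SH_DB^2)."""
    shadowing = rng.normal(0.0, SIGMA_SH_DB)
    return -35.3 - 37.6 * np.log10(distance_m) + shadowing

# Median gain (no shadowing) of a user at 100 m, and its SNR when served
# with the full 46 dBm downlink power: 46 - 110.5 + 94 = 29.5 dB.
beta_db = -35.3 - 37.6 * np.log10(100.0)
snr_db = 46.0 + beta_db - NOISE_DBM
```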
Fig. 3. CDFs of the achievable downlink sum SE per cell, for different transmission, precoding and power allocation strategies. Settings: K = 20, α = 0.2, a_u = 10^-0.5, τ_p = 80, T = 5, n_d = 100.

The same conclusions hold for the results shown in Fig. 3, where the CDFs of the corresponding sum SE per cell are illustrated. In these figures, we mainly emphasize the eMBB service outage likely occurring when PUNC is adopted. We define the eMBB service outage, under PUNC operation, as

ς_out = Pr{ Σ_{k ∈ K_j^e} SE_jk^e = 0 }, j = 1, …, L,

where the probability is computed with respect to the large-scale fading. This probability for a BS to provide no service in a TDD frame to its eMBB users depends on the activation pattern of the URLLC users and on the number of slots per frame. We will discuss this aspect in detail later. Under the settings considered in Fig. 3, the eMBB service outage is quite significant, amounting to about 30%.

Fig. 4. Network availability for different transmission, precoding and power allocation strategies. Settings: K = 20, α = 0.2, a_u = 10^-0.5, τ_p = 80, T = 5, n_d = 100.

Fig. 5. Downlink per-user error probability for different transmission and precoding strategies. Settings: EPA, K = 20, α = 0.2, a_u = 10^-0.5, τ_p = 80, T = 5, n_d = 100.

In Fig. 4 we move to the URLLC performance by showing the downlink network availability achieved when ϵ_target^dl = 10^-5. Despite the interference caused by the eMBB users when SPC is performed, both M-MMSE and RZF are able to provide levels of network availability close to one, in line with PUNC, revealing a great ability to suppress the interference and support high reliability. Conversely, MR provides poor performance in SPC when the EPA or OPA (which is optimal for the eMBB users) schemes are used. Notice that our choice for the parameters of the FPA scheme pays off for the combination SPC/MR. The network availability values shown in Fig. 4 are obtained from the error probabilities whose CDFs are illustrated in Fig. 5. To better understand its meaning, the network availability is given by the cross-point between the CDF of the per-user error probability and the vertical line representing the error probability target value, as Fig. 5 highlights (blue circle markers). From this set of simulations, we conclude that SPC is clearly superior to PUNC in terms of SE, yet provides very high network availability when M-MMSE or RZF are employed. If MR is the only viable option (for instance, due to strict complexity or hardware constraints), then SPC with FPA, upon properly setting the design parameters ν and ω, is an effective choice to keep the network availability high while preventing any eMBB service outage.

Fig. 6. Average per-user SE achieved by SPC with FPA, for different precoding schemes and values of ν, ω. The average is taken over 200 network snapshots. Settings: K = 20, α = 0.2, a_u = 10^-0.5, τ_p = 80, T = 5, n_d = 100.
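Both performance measures used in this section reduce to empirical frequencies over snapshots: the eMBB service outage counts (snapshot, cell) pairs with zero eMBB sum SE, and the network availability is the value of the per-user error-probability CDF at the target. A sketch under those definitions, with toy numbers of our own:

```python
import numpy as np

def embb_service_outage(sum_se_per_cell):
    """Empirical eMBB service outage under PUNC: fraction of (snapshot, cell)
    pairs in which the per-cell sum SE of the eMBB users equals zero.
    `sum_se_per_cell` has shape (n_snapshots, L)."""
    return float(np.mean(sum_se_per_cell == 0.0))

def network_availability(error_prob, target=1e-5):
    """Empirical downlink network availability: fraction of per-user URLLC
    error probabilities not exceeding `target`, i.e., the cross-point of
    their CDF with the vertical line at the target value."""
    return float(np.mean(error_prob <= target))

# toy inputs, for illustration only
se = np.array([[12.0, 0.0, 7.5, 3.1],
               [ 0.0, 0.0, 9.9, 4.2]])     # 2 snapshots, L = 4 cells
eps = np.array([1e-7, 2e-6, 3e-4, 8e-6])   # 4 URLLC users
```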
In this regard, we now focus on how to select ν and ω appropriately. Using the same settings as in the first set of simulations, in Fig. 6 we plot the average per-user SE assuming SPC and different precoding schemes with FPA, as ν and ω vary. From the eMBB user perspective, it is preferable to set a small value for ω, and ν in the interval [−0.5, 0]. While the former is trivial, the latter needs further discussion. Indeed, recall that positive values of ν allocate more power to users with better channel conditions. Since we assume the URLLC users are uniformly distributed in a smaller area surrounding the BSs, it is very likely that they are closer to the BS than most of the eMBB users.
Therefore, negative values of ν increase the fairness and improve the eMBB users' performance. Large values of both ω and ν excessively unbalance the power distribution in favor of the URLLC users, degrading the SE of the eMBB users. Conversely, small values of both ω and ν break down the network availability of the URLLC users in SPC operation, as clearly seen in Fig. 7. Nevertheless, both M-MMSE and RZF are able to provide levels of network availability close to 1 except when ν = −1, while MR is quite sensitive to this parameter tuning. Suppressing the multi-user interference is of vital importance when SPC is adopted, and RZF, although not dealing with the inter-cell interference, offers an excellent trade-off between performance and practicality. Fine-tuning the parameters of the FPA scheme yields satisfying performance when using MR.
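The role of ν can be illustrated with a generic fractional power allocation of the form q_k ∝ β_k^ν; this specific normalization is our own illustrative assumption, not necessarily the exact FPA rule of the paper, but it captures the behavior discussed above: ν > 0 is opportunistic, ν < 0 is fairness-oriented, and ν = 0 is equal power.

```python
import numpy as np

def fpa_weights(beta, nu):
    """Illustrative fractional power allocation: weights proportional to
    beta_k**nu, normalized to sum to one. nu > 0 favors users with strong
    channels; nu < 0 favors cell-edge users; nu = 0 is equal power."""
    w = beta ** nu
    return w / w.sum()

# one strong and one weak user (linear channel gains, illustrative values)
beta = np.array([1e-9, 1e-11])

strong_first = fpa_weights(beta, 0.5)    # opportunistic split
weak_first = fpa_weights(beta, -0.5)     # fairness-oriented split
equal = fpa_weights(beta, 0.0)           # equal power
```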
FPA becomes a valid, heuristic alternative to combat the multi-user interference whenever the latter cannot be removed by the precoding technique. Setting ω becomes pointless when using PUNC with FPA, as only URLLC transmissions take place in the considered slot. Hence, in Fig. 8 and Fig. 9 we focus on the average SE per user and the network availability as only ν varies. In both cases we notice that an equal power allocation, i.e., ν = 0, is desirable. As per the SE of the eMBB users, negative values of ν support the lower SEs (e.g., the 95%-likely SE per user), hence the fairness among the users, while large positive values of ν support the peak SE in a greedy fashion, neglecting the lower SEs. Therefore, ν = 0 is sound if the average SE is targeted, especially when the multi-user interference is partially or fully canceled.

Fig. 7. Network availability achieved by SPC with FPA, for different precoding schemes and values of ν, ω. Settings: K = 20, α = 0.2, a_u = 10^-0.5, τ_p = 80, T = 5, n_d = 100.

Fig. 8. Average per-user SE (with 95% confidence interval) achieved by PUNC with FPA, for different precoding schemes and values of ν. The average is taken over 200 network snapshots. Settings: K = 20, α = 0.2, a_u = 10^-0.5, τ_p = 80, T = 5, n_d = 100.
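The trade-off between the average SE and the 95%-likely SE discussed above amounts to comparing the mean of the SE distribution with its lower 5% quantile; a small sketch with made-up SE samples:

```python
import numpy as np

def likely_se(se_samples, likely=0.95):
    """95%-likely SE: the value exceeded by a `likely` fraction of the users,
    i.e., the (1 - likely)-quantile (lower tail) of the SE distribution."""
    return float(np.quantile(se_samples, 1.0 - likely))

# made-up per-user SE samples [bit/s/Hz], already sorted for readability
se = np.array([0.2, 0.5, 1.0, 1.8, 2.4, 3.0, 3.6, 4.1, 4.9, 5.5])
avg = float(se.mean())       # average SE: 2.7
tail = likely_se(se)         # 95%-likely SE, well below the average
```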
As for the network availability of the URLLC users, any choice of ν ∈ [−1, 1] is solid as long as M-MMSE or RZF is employed, while the performance of MR is relatively penalized whenever a non-neutral choice of ν is taken. Presumably, the number of URLLC users simultaneously active in the same slot (resulting from the chosen values of α and au) is such that the multi-user interference is not significant.

Next, we evaluate the performance as a function of the number of slots in a TDD frame, T, and the size of the slot, nd, which in turn determines the URLLC codeword length. In this set of simulations and hereafter, we omit the results achieved by MR and, motivated by the previous results, only consider FPA with ν = 0 and ω = α. Fig. 10 shows the CDFs of the sum SE per cell, for three different setups: (i) nd = 25, T = 20; (ii) nd = 50, T = 10; and (iii) nd = 100, T = 5.

ON THE COEXISTENCE OF EMBB AND URLLC IN MULTI-CELL MASSIVE MIMO

Fig. 9. Network availability achieved by PUNC with FPA, for different precoding schemes and values of ν. Settings: K = 20, α = 0.2, au = 10^−0.5, τp = 80, T = 5, nd = 100.

Fig. 10. CDFs of the achievable downlink sum SE per cell, for different transmission and precoding strategies, as the number of slots per frame varies. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10^−0.5, τp = 80. (i) nd = 25, T = 20; (ii) nd = 50, T = 10; and (iii) nd = 100, T = 5.

The structure of the TDD frame does not have a significant impact on the SE of the eMBB users when SPC is used. Conversely, it deeply affects the per-cell sum SE in the case of PUNC. Indeed, increasing the number of slots per frame makes the probability of eMBB service outage smaller, as it increases the opportunities for an eMBB user to find slots with no active URLLC users.
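Under this reading, an eMBB user served by PUNC is in outage exactly when every slot of the frame hosts at least one active URLLC user. A minimal model (assuming αK = 4 URLLC users per cell, each independently active with probability au in every slot; the independence assumption and the per-cell count are ours) lands close to the eMBB service outage values reported for Fig. 10:

```python
def embb_outage_prob(T, n_u=4, a_u=10 ** -0.5):
    """Probability that all T slots of a frame contain at least one
    active URLLC user, i.e. the eMBB user never finds a clean slot."""
    p_slot_busy = 1.0 - (1.0 - a_u) ** n_u  # slot has >= 1 active URLLC user
    return p_slot_busy ** T

for T in (20, 10, 5):
    print(T, round(embb_outage_prob(T), 4))
# T = 20, 10, 5 give roughly 0.007, 0.085 and 0.291 under this toy
# model -- in the same ballpark as the reported 0.01, 0.0725, 0.2875.
```

The model also makes the qualitative claim explicit: the outage probability decays geometrically in T, so adding slots helps the eMBB users.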
This argument is supported by the results in Fig. 10, in which the eMBB service outage equals 0.01, 0.0725, and 0.2875 when T = 20, T = 10, and T = 5, respectively. On the other hand, with fewer slots, eMBB users might be active for a longer time, thereby experiencing higher SE. This explains the larger variations of the per-cell sum SE as T is decreased.

The length of the slot directly affects the performance of the URLLC users. As we can see in Fig. 11, the network availability increases drastically with the length of the slot (i.e., the URLLC codeword length). In fact, the length of the URLLC codeword determines the transmission rate of the URLLC users as R = b/nd; thus, the shorter the codeword, the higher the rate requirement to be reliably achieved and, in turn, the larger the error probability.²

Fig. 11. Network availability, for different transmission and precoding strategies, as the length of the slot varies. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10^−0.5, τp = 80.
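The relation R = b/nd can be made concrete for the three slot lengths considered (the payload b = 160 bits is a hypothetical value, chosen only to fix the scale):

```python
def urllc_rate(b, n_d):
    """Rate requirement R = b / n_d in bits per channel use: the same
    b-bit payload squeezed into a shorter codeword demands a higher
    rate, hence a larger error probability."""
    return b / n_d

b = 160  # hypothetical URLLC payload, in bits
for n_d in (25, 50, 100):
    print(n_d, urllc_rate(b, n_d))
```

Halving the slot doubles the rate requirement, which is why network availability falls so quickly as nd shrinks.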
Again, SPC is the technique that overall guarantees the best performance to both the eMBB and URLLC users, as its main limitation, namely the multi-user interference it causes, is overcome by using interference-suppression-based precoding schemes. Lastly, although letting the URLLC transmissions span many channel uses is beneficial in terms of network availability, the latency requirements impose localizing the transmissions in time.

We now move our focus to the impact of pilot contamination and estimation overhead on the performance. Fixing the TDD frame length and the number of slots per frame, we vary the length of the uplink training, hence the number of available orthogonal pilots, and the length of each slot accordingly. In Fig. 12 we show how the average sum SE per cell evolves in different operating regimes with respect to the uplink training length. In these simulations, we assume K = 20, α = 0.2, τc = 580, and T = 5. Small values of τp entail low channel estimation overhead but high levels of pilot contamination, which reduces the effectiveness of the precoding. Our pilot assignment scheme preserves the performance of the URLLC users by assigning them unique pilots if available; otherwise, pilots are assigned randomly and contamination hits any user indiscriminately. The maximum number of URLLC users potentially active in this scenario is, according to the chosen parameters, 16. Hence, pilots are assigned randomly when τp = 10, causing both intra- and inter-cell pilot contamination and providing a low sum SE per cell, namely about 30 bit/s/Hz with SPC and less than 10 bit/s/Hz with PUNC. The performance worsens when τp = 20, as the eMBB users have to share only 4 orthogonal pilots once the protection mechanism for the URLLC users is triggered.
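The URLLC-first pilot assignment just described can be sketched as follows (a simplified reading of the scheme; the function name and the fallback behaviour when pilots run out are our assumptions):

```python
import random

def assign_pilots(urllc_ids, embb_ids, tau_p, rng=None):
    """URLLC users get mutually orthogonal (unique) pilots whenever
    tau_p allows it; eMBB users then share whatever pilots remain,
    reused at random.  With too few pilots even for the URLLC users,
    everyone draws at random and contamination hits indiscriminately."""
    rng = rng or random.Random(0)
    if len(urllc_ids) > tau_p:
        return {u: rng.randrange(tau_p) for u in urllc_ids + embb_ids}
    assignment = {u: p for p, u in enumerate(urllc_ids)}
    # Pilots left over for the eMBB users (all pilots if none remain).
    leftover = list(range(len(urllc_ids), tau_p)) or list(range(tau_p))
    for u in embb_ids:
        assignment[u] = rng.choice(leftover)
    return assignment

# 16 URLLC users and tau_p = 20: URLLC pilots are unique, while the
# eMBB users share the remaining 4 pilots (the regime discussed above).
a = assign_pilots(list(range(16)), list(range(16, 36)), 20)
```

This reproduces the τp = 20 regime in which the protection mechanism is triggered and eMBB contamination spikes.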
As we increase the value of τp, the intra-cell pilot contamination is primarily reduced by assigning orthogonal pilots to eMBB users of the same cell. If τp ≥ 32, then intra-cell pilot contamination is prevented and the inter-cell interference among the eMBB users remains the only impairment. The sum SE per cell keeps growing up to τp = 80, when all the users in the network are assigned mutually orthogonal pilots and the benefits of having no pilot contamination at all overcome the penalty from increasing the estimation overhead. Trivially, there are no benefits to the channel estimation when further increasing τp, while the estimation overhead turns out to be expensive and drastically lowers the sum SE per cell.

²The random-coding union bound in (18) defines the error probability as the probability that the average generalized information density is smaller than the transmission rate requirement.

Fig. 12. Average SE per cell (with 95% confidence interval), for different transmission and precoding strategies, as τp (and nd) varies. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10^−0.5, τc = 580, T = 5.
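A sketch of the frame budget underlying this trade-off, assuming the frame splits as τc = τp + T·nd (consistent with the stated settings τc = 580, T = 5, τp = 80, nd = 100):

```python
def frame_split(tau_p, tau_c=580, T=5):
    """Split a coherence block of tau_c symbols into tau_p pilot symbols
    plus T data slots of n_d symbols each; returns the slot length and
    the fraction of the block actually carrying data."""
    n_d = (tau_c - tau_p) // T
    return n_d, T * n_d / tau_c

print(frame_split(10))   # long slots, but heavy pilot contamination
print(frame_split(80))   # all pilots orthogonal, n_d = 100
print(frame_split(300))  # estimation overhead dominates
```

Every extra pilot symbol shortens each slot by 1/T symbols, so once all users already have orthogonal pilots (τp = 80 here), growing τp can only hurt.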
Finally, notice that RZF and M-MMSE provide essentially the same performance when both intra- and inter-cell pilot contamination occur, because the ability to suppress the multi-user interference is poor for both schemes. As for the URLLC users, pilot contamination heavily affects the network availability when τp < 16, especially when SPC is employed and even though a long slot lowers the rate requirements, as we can observe in Fig. 13. Pilot contamination among URLLC users is destructive mainly because they are likely to be close to the BS and to each other, experiencing strong interference that cannot be resolved when their channel estimates are correlated. Hence, our approach of prioritizing the URLLC users in the pilot assignment is technically sound. In addition, increasing the estimation overhead deeply penalizes the network availability, since more resources are subtracted from the data transmission: the slot length reduces and, as already explained earlier, the rate requirements of the URLLC users increase.
Next, we study how the performance is affected by the random activation pattern and the number of potentially active URLLC users per frame. Fig. 14 shows the average sum SE per cell as au and α vary, assuming different transmission and precoding schemes, and FPA with ν = 0 and ω = α. Notice that increasing ω proportionally to α is a reasonable approach for SPC, as more power is allocated to an increasing number of potentially active URLLC users, especially for large values of au. In these simulations, we assume two TDD frame configurations: (i) f = 4, T = 5, nd = 100, and (ii) f = 3, T = 8, nd = 65 (whose results are instead shown in Fig. 15). First, we observe that a similar average sum SE per cell can be achieved by adopting the considered TDD frame configurations: pilot contamination is what slightly degrades the performance of the eMBB users when using the second frame configuration.

Fig. 13. Network availability, for different transmission and precoding strategies, as τp (and nd) varies. Settings: FPA with ν = 0 and ω = 0.2, K = 20, α = 0.2, au = 10^−0.5, τc = 580, T = 5.

Fig. 14. Average SE per cell, for different transmission and precoding strategies, as au and α vary. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = α, K = 20, τc = 580, f = 4, T = 5, nd = 100.
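Under one natural reading of the FPA parameters (ω as the fraction of BS power reserved for the URLLC users, split equally inside each group when ν = 0; this reading is ours, not stated verbatim), ω = α makes the per-user power independent of the class mix:

```python
def per_user_power(P, alpha, K):
    """Split BS power P with omega = alpha: a fraction alpha of P goes
    to the alpha*K URLLC users and the rest to the (1 - alpha)*K eMBB
    users; with nu = 0 each group shares its budget equally."""
    n_urllc = round(alpha * K)
    n_embb = K - n_urllc
    p_urllc = alpha * P / n_urllc if n_urllc else 0.0
    p_embb = (1 - alpha) * P / n_embb if n_embb else 0.0
    return p_urllc, p_embb

# alpha = 0.2 and alpha = 0.8 give the same per-user power when K = 20:
print(per_user_power(1.0, 0.2, 20))
print(per_user_power(1.0, 0.8, 20))
```

Every user ends up with P/K under this reading, which is one way to see why scaling ω with α is a "neutral" way to protect a growing URLLC population.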
The performance of PUNC converges to that of SPC when au ≥ 10^−2, hence for sparse activation patterns, as expected. Again, the performance gap between RZF and M-MMSE reduces in the second scenario (Fig. 15), as the inter-cell pilot contamination decreases the ability of M-MMSE to suppress the multi-user interference. PUNC yields eMBB service outage for large values of au, whereas SPC is still able to cancel the URLLC user interference and to provide excellent SEs. Lastly, we observe that if 80% of the users request URLLC, then the performance of the eMBB users is reduced by almost one third with respect to the case α = 0.2. This result is mainly due to the chosen value of ω in the FPA scheme, which aims to favor the URLLC performance as the number of URLLC users increases.

The performance achieved by the two considered TDD frame configurations appreciably differs in terms of network availability, as shown in Fig. 16 for SPC and Fig. 17 for PUNC. In both cases, reducing the length of the slot leads to about a 10% performance loss, while the pilot contamination only concerns the eMBB users.

Fig. 15. Average SE per cell, for different transmission and precoding strategies, as au and α vary. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = α, K = 20, τc = 580, f = 3, T = 8, nd = 65.

Fig. 16. Network availability, for different precoding strategies, as au and α vary. The average is taken over 200 network snapshots. Settings: SPC and FPA with ν = 0 and ω = α, K = 20, τc = 580. Two TDD frame configurations are considered.
This performance gap is slightly more pronounced when using PUNC because the entire BS power is distributed among the URLLC users, causing stronger mutual interference. Overall, the first TDD frame configuration turns out to be quite robust to any of the considered transmission and precoding strategies, random URLLC activation patterns and URLLC user loads.

Fig. 17. Network availability, for different precoding strategies, as au and α vary. Settings: PUNC and FPA with ν = 0 and ω = α, K = 20, τc = 580. Two TDD frame configurations are considered.

Fig. 18. eMBB service outage, for different precoding strategies, as au and α vary. Settings: PUNC and FPA with ν = 0 and ω = α, K = 20, τc = 580.
Two TDD frame configurations are considered.

A final aspect to be analyzed for this set of simulations is how the probability of eMBB service outage varies with au and α when PUNC is adopted. This completes the picture of the operating points at which PUNC is an effective choice for the eMBB users too and, importantly, further remarks the relevance of properly structuring the TDD frame. As we can see in Fig. 18, the advantage of adopting the TDD frame configuration with T = 8 slots, when using PUNC, is that it prevents the eMBB service outage better than the configuration with T = 5. For instance, when au = 10−1 and α = 0.8 or α = 0.
6, partitioning the share of the frame devoted to data transmission into 8 slots makes it possible to halve the eMBB service outage compared to the case where 5 slots are adopted. Overall, PUNC can compete with SPC only in scenarios with low URLLC traffic loads, upon properly structuring the TDD frame, and as long as a moderate eMBB performance loss is tolerated, either in terms of sum SE per cell or of eMBB service outage. On the other hand, SPC hinges on precoding schemes able to suppress the multi-user interference which, in turn, leverages the spatial degrees of freedom available at the BS and the high accuracy of the acquired CSI.
Fig. 19. Average SE per cell (with 95% confidence interval), for different transmission and precoding strategies, as K and τc vary. The average is taken over 200 network snapshots. Settings: FPA with ν = 0 and ω = 0.2, α = 0.2, au = 10−1, f = 3, T = 5.

Finally, we evaluate the performance varying the total number of users and the TDD frame length. Fig. 19 shows the average sum SE per cell, for different transmission and precoding strategies, as the number of users per cell, K, grows from 10 to 60, and considering two different TDD frame lengths, namely 580 and 300 channel uses.
The latter may support a shorter coherence time and a narrower coherence bandwidth, as well as a higher user mobility, compared to the case with 580 channel uses. However, a shorter frame entails fewer resources that can be allocated to data transmission and uplink training. In these simulations we assume FPA with ν = 0 and ω = 0.2, α = 0.2, au = 10−1, T = 5 and pilot reuse factor f = 3. Moreover, as τp = fK and τc is fixed, for each value of K we have different configurations of uplink training and slot length, i.e., τp and nd, respectively.
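This bookkeeping can be sketched in a few lines. The closed form nd = (τc − τp)/T is an inference from the (τp, nd) pairs tabulated in Tables III and IV, not a formula quoted from the text:

```python
def frame_resources(K, tau_c, f=3, T=5):
    """TDD-frame bookkeeping: tau_p channel uses go to uplink pilots,
    and the remaining tau_c - tau_p are split into T downlink slots of
    n_d channel uses each."""
    tau_p = f * K                 # pilot overhead: reuse factor times users per cell
    n_d = (tau_c - tau_p) // T    # channel uses per downlink slot
    return tau_p, n_d

# Reproduces the (tau_p, n_d) columns of Table III (tau_c = 580) and
# Table IV (tau_c = 300), e.g. K = 20 gives (60, 104) and (60, 48).
```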
From Fig. 19 we observe that the average sum SE per cell increases with K, which demonstrates the great ability of SPC with M-MMSE and RZF to spatially multiplex the users. The average sum SE per cell saturates for values of K larger than 60 for τc = 580, and around 40 for τc = 300, wherein the channel estimation overhead heavily burdens the SE. PUNC is far inferior to SPC because it allocates fewer resources to the eMBB users, and the performance gap increases with K as the number of URLLC users per cell grows proportionally. Therefore, letting K increase makes punctured slots more likely, which not only subtracts resources from the eMBB users, reducing their SE, but also increases the eMBB service outage, as shown in Table III. Notice that the eMBB service outage does not change when varying τc as long as T is fixed. Table III and Table IV show the network availability for different transmission and precoding strategies and different values of K, also emphasizing how τp and nd vary accordingly to meet the TDD frame length.
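The claim that larger K makes punctured slots more likely can be quantified under a simple independent-activation model, which is an assumption made here for illustration (the paper's actual activation pattern may differ): with αK URLLC users per cell, each active in a given slot with probability au, a slot is punctured whenever at least one of them is active.

```python
def punctured_slot_prob(n_urllc, a_u):
    """P(slot punctured under PUNC) = P(at least one of n_urllc users
    is active) = 1 - (1 - a_u)**n_urllc, assuming independent per-slot
    activations with probability a_u (an illustrative model)."""
    return 1.0 - (1.0 - a_u) ** n_urllc

# With a_u = 0.1 and alpha = 0.2: K = 10 (2 URLLC users) gives 0.19,
# while K = 60 (12 URLLC users) gives about 0.72.
```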
In particular, Table III shows the performance achieved with τc = 580, while Table IV shows the performance achieved with τc = 300.

TABLE III
NETWORK AVAILABILITY AND EMBB SERVICE OUTAGE, τc = 580

K    τp    nd     ηdl (SPC)            ηdl (PUNC)           ςout (PUNC)
                  M-MMSE    RZF        M-MMSE    RZF
10   30    110    0.9989    0.9966     1         0.9989     0.0012
20   60    104    0.9988    0.9957     0.9944    0.9906     0.0038
30   90    98     0.9988    0.9950     0.9934    0.9893     0.0225
40   120   92     0.9969    0.9885     0.9881    0.9819     0.0625
50   150   86     0.9864    0.9787     0.9790    0.9672     0.1050
60   180   80     0.9807    0.9697     0.9728    0.9601     0.1737

TABLE IV
NETWORK AVAILABILITY AND EMBB SERVICE OUTAGE, τc = 300

K    τp    nd     ηdl (SPC)            ηdl (PUNC)           ςout (PUNC)
                  M-MMSE    RZF        M-MMSE    RZF
10   30    54     0.7936    0.7683     0.7844    0.7534     0.0012
20   60    48     0.6786    0.6353     0.6905    0.6685     0.0038
30   90    42     0.4796    0.4296     0.5646    0.5435     0.0225
40   120   36     0.1813    0.1457     0.3192    0.3192     0.0625
50   150   30     0.0021    0          0.0250    0.0250     0.1050
60   180   24     0         0          0         0          0.1737

The TDD frame with τc = 580 makes it possible to achieve a network availability above 96% up to 60 users per cell (of which 12 are URLLC users) with any of the considered transmission and precoding techniques, meaning that such an amount of resources suffices to excellently support the considered URLLC user loads and their activation pattern. Conversely, the network availability supported by the TDD frame with τc = 300, reported in Table IV, is considerably lower, even close (or equal) to zero for K ≥ 50, emphasizing how sensitive the network availability is to the length of the TDD frame, hence to the amount of available resources. Importantly, we observe a decreasing trend of the network availability as K increases, which for PUNC is milder and mainly due to the shorter URLLC codeword length, but for SPC is severe and mainly due to the increase of the multi-user interference.
Indeed, the results in Table IV clearly confirm that PUNC is more robust than SPC when K ≥ 20.

VI. CONCLUSION

In this paper, we considered the non-orthogonal multiplexing of heterogeneous services, namely the enhanced mobile broadband (eMBB) and the ultra-reliable low-latency communication (URLLC), in the downlink of a multi-cell massive MIMO system. eMBB and URLLC have opposite characteristics and diverse requirements. eMBB transmissions involve a large payload that spans multiple radio frames, and demand high spectral efficiency. URLLC users, by contrast, intermittently transmit small payloads in a very short time, demanding low latency and an error probability on the order of 10−5. Such a heterogeneity calls for effective resource allocation strategies to let eMBB and URLLC peacefully coexist.
Firstly, we provided a unified information-theoretic framework to assess the spectral efficiency (SE) of the eMBB in the infinite-blocklength ergodic regime, and the error probability of the URLLC in the nonasymptotic finite-blocklength regime. Both analyses encompass imperfect channel state information (CSI) acquisition at the base stations (BSs) via uplink pilot transmissions, pilot contamination and pilot overhead, spatially correlated channels and the lack of CSI at the users. Secondly, we generalized the proposed framework to accommodate two alternative coexistence strategies: puncturing (PUNC) and superposition coding (SPC). The former prevents the inter-service interference, aiming to protect the URLLC reliability, while the latter accepts it, aiming to maintain the eMBB service. Thirdly, we numerically evaluated the performance achieved by PUNC and SPC under different precoding and power allocation schemes, and subject to different configurations of the time-division duplex radio frame and of the URLLC random activation pattern.
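The per-slot semantics of the two strategies can be caricatured in a few lines. This is a deliberately minimal sketch: the actual schemes operate on precoded signal vectors with the FPA power split described above, which this toy scalar version omits.

```python
def transmit_slot(x_embb, x_urllc, urllc_active, mode):
    """Per-slot downlink signal under the two coexistence strategies.
    PUNC drops the eMBB signal whenever a URLLC transmission occurs,
    removing inter-service interference; SPC superposes both signals,
    accepting the interference in order to keep the eMBB service alive."""
    if mode == "PUNC":
        return x_urllc if urllc_active else x_embb
    if mode == "SPC":
        return x_embb + x_urllc if urllc_active else x_embb
    raise ValueError(f"unknown mode: {mode}")
```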
Simulation results revealed that the spatial degrees of freedom available at the BSs, when fully exploited by interference-suppression-based precoding schemes and supported by high-quality CSI acquisition, largely resolve the multi-user interference caused by the SPC operation, providing a far higher eMBB SE than PUNC while ensuring similarly low error probabilities for the URLLC. However, whenever these conditions do not hold, e.g., when severe pilot contamination degrades the channel estimates or the degrees of freedom are insufficient to handle the interference among many users, PUNC becomes necessary to preserve the URLLC performance, although it might cause eMBB service outage. Unlike prior works, wherein the URLLC performance is inappropriately assessed through outage-capacity analysis or through the error probability obtained by the normal approximation, the finite-blocklength information-theoretic analysis in this work relies on mismatched receivers and on the saddlepoint approximation, which is appropriate for URLLC scenarios in massive MIMO operation.
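For context on the approximation the paper argues against, the normal approximation to the finite-blocklength error probability can be sketched for a simple complex AWGN channel as follows. This is a minimal illustration only: the function names are ours, the AWGN setting is an assumption, and the paper's own analysis instead targets fading massive MIMO channels with mismatched decoding, where this approximation is shown to be inaccurate.

```python
import math

def q_func(x: float) -> float:
    # Gaussian Q-function: Q(x) = P[N(0,1) > x] = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def normal_approx_error(snr: float, n: int, k: int) -> float:
    """Normal approximation to the block error probability of sending k
    information bits over n uses of a complex AWGN channel at a given SNR.
    Illustrative sketch only; not the saddlepoint analysis used in the paper."""
    log2e = math.log2(math.e)
    capacity = math.log2(1.0 + snr)  # bits per channel use
    # channel dispersion of the complex AWGN channel (in bits^2)
    dispersion = (snr * (snr + 2.0) / (1.0 + snr) ** 2) * log2e ** 2
    arg = (n * capacity - k + 0.5 * math.log2(n)) / math.sqrt(n * dispersion)
    return q_func(arg)
```

As expected, for a fixed rate k/n the approximated error probability shrinks as the blocklength grows, which is exactly the regime (small n, stringent targets) where the paper argues this approximation must be replaced by the saddlepoint method.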
This work can be extended by including massive machine-type communication (mMTC) in the coexistence strategies, and by including the study of the uplink in the proposed generalized framework. Finally, investigating the non-orthogonal multiplexing of heterogeneous services in distributed user-centric systems, such as cell-free massive MIMO [34]–[36], which provide user proximity, macrodiversity, and ubiquitous connectivity, is certainly an appealing future research direction.

REFERENCES

[1] IMT Vision – Framework and overall objectives of the future development of IMT for 2020 and beyond, ITU-R Std. M.2083-0, 2015.
[2] P. Popovski, K. F. Trillingsgaard, O. Simeone, and G. Durisi, "5G wireless network slicing for eMBB, URLLC, and mMTC: A communication-theoretic view," IEEE Access, vol. 6, pp. 55765–55779, 2018.
[3] T. L. Marzetta, "Noncooperative cellular wireless with unlimited numbers of base station antennas," IEEE Trans. Wireless Commun., vol. 9, no. 11, pp. 3590–3600, 2010.
[4] T. L. Marzetta, E. G. Larsson, H. Yang, and H. Q. Ngo, Fundamentals of Massive MIMO. Cambridge University Press, 2016.
[5] E. Björnson, J. Hoydis, and L. Sanguinetti, "Massive MIMO networks: Spectral, energy, and hardware efficiency," Foundations and Trends in Signal Processing, vol. 11, no. 3-4, pp. 154–655, 2017.
[6] P. Popovski, J. J. Nielsen, Č. Stefanović, E. de Carvalho, E. Ström, K. F. Trillingsgaard, A. Bana, D. M. Kim, R. Kotaba, J. Park, and R. B. Sørensen, "Wireless access for ultra-reliable low-latency communication: Principles and building blocks," IEEE Network, vol. 32, no. 2, pp. 16–23, Mar. 2018.
[7] A.-S. Bana, E. de Carvalho, B. Soret, T. Abrão, J. C. Marinello, E. G. Larsson, and P. Popovski, "Massive MIMO for internet of things (IoT) connectivity," Physical Communication, vol. 37, p. 100859, 2019. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1874490719303891
[8] J. Östman, A. Lancho, G. Durisi, and L. Sanguinetti, "URLLC with massive MIMO: Analysis and design at finite blocklength," IEEE Trans. Wireless Commun., vol. 20, no. 10, pp. 6387–6401, Oct. 2021.
[9] E. Björnson, E. de Carvalho, J. H. Sørensen, E. G. Larsson, and P. Popovski, "A random access protocol for pilot allocation in crowded massive MIMO systems," IEEE Trans. Wireless Commun., vol. 16, no. 4, pp. 2220–2234, Apr. 2017.
[10] A. Anand, G. De Veciana, and S. Shakkottai, "Joint scheduling of URLLC and eMBB traffic in 5G wireless networks," in Proc. IEEE Conference on Computer Communications (INFOCOM), Apr. 2018, pp. 1970–1978.
[11] R. Kassab, O. Simeone, P. Popovski, and T. Islam, "Non-orthogonal multiplexing of ultra-reliable and broadband services in fog-radio architectures," IEEE Access, vol. 7, pp. 13035–13049, 2019.
[12] A. A. Esswie and K. I. Pedersen, "Opportunistic spatial preemptive scheduling for URLLC and eMBB coexistence in multi-user 5G networks," IEEE Access, vol. 6, pp. 38451–38463, 2018.
[13] S. F. Abedin, M. G. R. Alam, S. M. A. Kazmi, N. H. Tran, D. Niyato, and C. S. Hong, "Resource allocation for ultra-reliable and enhanced mobile broadband IoT applications in fog network," IEEE Trans. Commun., vol. 67, no. 1, pp. 489–502, Jan. 2019.
[14] A. Matera, R. Kassab, O. Simeone, and U. Spagnolini, "Non-orthogonal eMBB-URLLC radio access for cloud radio access networks with analog fronthauling," Entropy, vol. 20, no. 9, 2018.
[15] M. Alsenwi, N. H. Tran, M. Bennis, A. Kumar Bairagi, and C. S. Hong, "eMBB-URLLC resource slicing: A risk-sensitive approach," IEEE Commun. Lett., vol. 23, no. 4, pp. 740–743, Apr. 2019.
[16] R. Abreu, T. Jacobsen, G. Berardinelli, K. Pedersen, N. H. Mahmood, I. Z. Kovacs, and P. Mogensen, "On the multiplexing of broadband traffic and grant-free ultra-reliable communication in uplink," in Proc. IEEE Vehicular Technology Conference (VTC-Spring), Apr. 2019, pp. 1–6.
[17] E. N. Tominaga, H. Alves, R. D. Souza, J. Luiz Rebelatto, and M. Latva-aho, "Non-orthogonal multiple access and network slicing: Scalable coexistence of eMBB and URLLC," in Proc. IEEE Vehicular Technology Conference (VTC-Spring), Apr. 2021, pp. 1–6.
[18] F. Saggese, M. Moretti, and P. Popovski, "Power minimization of downlink spectrum slicing for eMBB and URLLC users," IEEE Trans. Wireless Commun., vol. 21, no. 12, pp. 11051–11065, Dec. 2022.
[19] M. Almekhlafi, M. A. Arfaoui, C. Assi, and A. Ghrayeb, "Joint resource and power allocation for URLLC-eMBB traffics multiplexing in 6G wireless networks," in Proc. IEEE Int. Conf. on Commun. (ICC), Jun. 2021, pp. 1–6.
[20] J. Zeng, T. Lv, R. P. Liu, X. Su, Y. J. Guo, and N. C. Beaulieu, "Enabling ultrareliable and low-latency communications under shadow fading by massive MU-MIMO," IEEE Internet Things J., vol. 7, no. 1, pp. 234–246, Jan. 2020.
[21] H. Ren, C. Pan, Y. Deng, M. Elkashlan, and A. Nallanathan, "Joint pilot and payload power allocation for massive-MIMO-enabled URLLC IIoT networks," IEEE J.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Sel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Areas Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 38, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 816–830, May 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [22] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Nasir, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Tuan, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Ngo, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Q.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Duong, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Poor, “Cell-free massive MIMO in the short blocklength regime for URLLC,” IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Wireless Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 20, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 9, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 5861–5871, Sep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [23] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Polyanskiy, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Poor, and S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Verdu, “Channel coding rate in the finite blocklength regime,” IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Inf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Theory, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 56, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2307–2359, May 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [24] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Scarlett, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Martinez, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Guill´en i F`abregas, “Mismatched decoding: Error exponents, second-order rates and saddlepoint approxi- mations,” IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Inf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Theory, vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 60, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2647–2666, May 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [25] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Martinez and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Guill´en i F`abregas, “Saddlepoint approximation of random-coding bounds,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' of Inf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Theory and Applicat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Workshop (ITA), Feb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2011, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 1–6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [26] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Yang, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Durisi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Koch, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Polyanskiy, “Quasi-static multiple- antenna fading channels at finite blocklength,” IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Inf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Theory, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 60, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 7, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 4232–4265, Jul.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [27] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Durisi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Koch, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' ¨Ostman, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Polyanskiy, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Yang, “Short- packet communications over multiple-antenna rayleigh-fading channels,” IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 64, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 618–629, Feb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [28] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' ¨Ostman, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Durisi, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Str¨om, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Coc¸kun, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Liva, “Short packets over block-memoryless fading channels: Pilot-assisted ON THE COEXISTENCE OF EMBB AND URLLC IN MULTI-CELL MASSIVE MIMO 19 or noncoherent transmission?”' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 67, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 1521–1536, Feb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [29] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Tse and P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Viswanath, Fundamentals of wireless communication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Cambridge University Press, 2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [30] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Lapidoth and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Shamai, “Fading channels: how perfect need ”perfect side information” be?”' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Inf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Theory, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 48, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 1118– 1134, May 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [31] R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Gallager, Information Theory and Reliable Communication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' New York, NY, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content='A: John Wiley & Sons, 1968.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [32] 3GPP, Further advancements for E-UTRA physical layer aspects (Re- lease 9).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 3GPP TS 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content='814, Mar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [33] 3rd Generation Partnership Project 3GPP, Service requirements for cyber-physical control applications in vertical domains.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 3GPP TS 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content='104 v.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content='0, Dec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [34] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Buzzi and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' D’Andrea, “User-centric communications versus cell- free massive MIMO for 5G cellular networks,” in WSA 2017;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 21th International ITG Workshop on Smart Antennas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' VDE, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 1–6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [35] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Interdonato, E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Bj¨ornson, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Ngo, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Frenger, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Larsson, “Ubiquitous cell-free massive MIMO communications,” EURASIP J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Wireless Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' and Netw.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2019, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 1, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 197, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' [36] S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Buzzi, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' D’Andrea, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Zappone, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' D’Elia, “User-centric 5G cellular networks: Resource allocation and comparison with the cell- free massive MIMO approach,” IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' Wireless Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 19, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 1250–1264, Feb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNE1T4oBgHgl3EQf_AZD/content/2301.03575v1.pdf'} +page_content=' 2020.' 
Activity Detection for Grant-Free NOMA in Massive IoT Networks

Mehrtash Mehrabi, Student Member, IEEE, Mostafa Mohammadkarimi, Member, IEEE, and Masoud Ardakani, Senior Member, IEEE
Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, T6G 1H9, Canada
Email: {mehrtash, mostafa.mohammadkarimi, ardakani}@ualberta.ca

Abstract—Recently, the grant-free transmission paradigm has been introduced for massive Internet of Things (IoT) networks to save both time and bandwidth and to deliver messages with low latency. To accurately decode the message of each device at the base station (BS), the devices active in each transmission frame must first be identified. In this work, we first cast activity detection as a threshold-comparison problem. By analyzing its probability of error, we show that the error probability is convex in the decision threshold, which makes it possible to find the optimal threshold minimizing the activity detection error. To approach this optimum, we propose a deep learning (DL)-based method called convolutional neural network activity detection (CNN-AD). To make the setting more practical, we consider an unknown and time-varying activity rate for the IoT devices. Our simulations verify that the proposed CNN-AD method achieves higher performance than existing non-Bayesian greedy-based methods.
Moreover, while existing methods need to know the activity rate of the IoT devices, our method works for unknown and even time-varying activity rates.

Index Terms—Activity detection, IoT, deep learning, NOMA, massive MIMO.

I. INTRODUCTION

Recent advances in wireless technology provide massive connectivity for machines and objects, resulting in the Internet of Things (IoT) [1]. The demand for IoT is expected to grow drastically in the near future, with numerous applications in health care, education, business, and governmental services [2]–[4].

As the demand for connectivity in IoT systems grows rapidly, it is crucial to improve spectrum efficiency [5]. Hence, non-orthogonal multiple access (NOMA) has been introduced [6]. To address the main challenges of IoT, including access collisions and massive connectivity, NOMA allows devices to access the channel non-orthogonally through either power-domain [7] or code-domain [8] multiplexing. However, this massive connectivity is hindered by the conventional grant-based NOMA transmission scheme, in which the exchange of control signaling between the base station (BS) and the IoT devices is needed for channel access. The excessive signaling overhead causes spectral deficiency and large transmission latency. Grant-free NOMA has been introduced to provide a flexible transmission mechanism and to save time and bandwidth by removing the exchange of control signaling between the BS and the devices. Hence, devices can transmit data at any time slot without any request-grant procedure.

In many IoT applications, a few devices become active for a short period of time to communicate with the BS while the others remain inactive [9]. In IoT networks with a large number of nodes, each with a small probability of activity, multiuser detection (MUD) methods rely heavily on activity detection (AD) prior to detection and decoding [4], [10]–[13].
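To make the role of threshold-based activity detection concrete, the sketch below (ours, not the paper's CNN-AD algorithm) despreads a received CDMA block with each device's code and declares a device active when the average despread energy exceeds a threshold. It is a simplified single-antenna, unit-gain illustration; the parameter values, the energy statistic, and the threshold `tau = 0.5` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

K, Nc, Ns = 8, 64, 16            # devices, spreading factor, symbols per packet
C = rng.choice([-1.0, 1.0], size=(Nc, K))   # one +/-1 spreading code per device

# Illustrative ground truth: devices 0 and 3 are active this frame.
active = np.zeros(K, dtype=bool)
active[[0, 3]] = True
B = np.where(active[:, None], rng.choice([-1.0, 1.0], size=(K, Ns)), 0.0)

sigma_w = 0.3
Y = C @ B + sigma_w * rng.standard_normal((Nc, Ns))   # noisy superposition

# Per-device statistic: mean energy of the matched-filter (despread) output.
# Active devices concentrate near 1; inactive ones sit near the noise floor.
stats = np.mean((C.T @ Y / Nc) ** 2, axis=1)

tau = 0.5                        # hand-picked threshold for this toy setting
detected = stats > tau
print("true  :", active.astype(int))
print("found :", detected.astype(int))
```

The paper's point is precisely that such a fixed `tau` is suboptimal: because the error probability is convex in the threshold, an optimal threshold exists, and the proposed CNN-AD learns a detector without knowing the activity rate.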
For uplink transmission in IoT systems with a grant-free NOMA transmission scheme, where the performance of MUD can be severely affected by multi-access interference, reliable detection of both the activity and the transmitted signal is very challenging and can be computationally expensive [10], [12].

Many studies in the literature have suggested compressive sensing (CS) methods for joint activity and data detection [12]–[16]. Although CS methods can achieve reliable MUD, they only work in networks with a sporadic traffic pattern and are expensive in terms of computational complexity [12]. Recently, deep learning (DL) models have attracted considerable interest in communication systems, and more specifically in signal detection [17]–[19]. The study in [19] suggests using DL for activity and data detection; however, it considers a deterministic traffic pattern for the activity, which is not valid in all environments.

In this work, we first formulate the problem of IoT activity detection as a threshold-comparison problem. We then analyze the probability of error of this activity detection method. Observing that this probability of error is a convex function of the decision threshold, we raise the question of finding the optimal threshold for minimizing the activity detection error. To achieve this goal, we propose a convolutional neural network (CNN)-based AD algorithm for grant-free code-domain uplink NOMA. Unlike existing CS-based AD algorithms, our solution does not need to know the exact number of active devices or even the activity rate of the IoT devices. In fact, in our system model we assume a time-varying and unknown activity rate and a heterogeneous network. Simulation results verify the success of the proposed algorithm.

The rest of this paper is organized as follows. We present the system model in Section II. In Section III, we formulate the device AD problem and derive its probability of error.
Section IV introduces our CNN-based solution for device AD. The simulation results are presented in Section V. Finally, the paper is concluded in Section VI.

arXiv:2301.01274v1 [eess.SP] 23 Dec 2022

Fig. 1: CDMA slotted ALOHA transmission frame (each frame consists of a channel estimation phase followed by $N_f$ data slots of $N_s$ symbols each).

A. Notations
Throughout this paper, $(\cdot)^*$ represents the complex conjugate. The matrix transpose and Hermitian operators are denoted by $(\cdot)^T$ and $(\cdot)^H$, respectively. The operator $\mathrm{diag}(\mathbf{b})$ returns a square diagonal matrix with the elements of vector $\mathbf{b}$ on the main diagonal. Furthermore, $\mathbb{E}[\cdot]$ is the statistical expectation, $\hat{a}$ denotes an estimate of $a$, and the size of a set $\mathcal{S}$ is denoted by $|\mathcal{S}|$. The constellation and the $m$-dimensional complex space are denoted by $\mathcal{D}$ and $\mathbb{C}^m$, respectively. Finally, the circularly symmetric complex Gaussian distribution with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$ is denoted by $\mathcal{CN}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$.

II. SYSTEM MODEL
We consider code-division multiple access (CDMA) uplink transmission, where $K$ IoT devices communicate with a single IoT BS equipped with $M$ receive antennas. This commonly used model [3], [6], [19] adopts a frame structure for uplink transmission composed of a channel estimation phase followed by CDMA slotted ALOHA data transmissions, as shown in Fig. 1. Each frame carries $N_f$ short packets of duration $T_t = N_s T_s$, where $N_s$ is the number of symbols per IoT packet and $T_s$ is the symbol duration. It is assumed that the channel is fixed during each frame but varies from one frame to another. The channel state information (CSI) is acquired at the BS during the channel estimation phase. As is common in massive machine-type communications (mMTC), we assume that the IoT devices are only active on occasion and transmit short data packets during each frame.
The activity rate of the IoT devices is denoted by $P_a \in [0, P_{\max}]$, which is assumed to be unknown and time-varying from one packet transmission to another. Let $b_k \in \mathcal{A}$ be the transmitted symbol of the $k$-th device, chosen from a finite alphabet $\mathcal{A}$ when the $k$-th device is active; otherwise, $b_k = 0$. Consequently, $b_k$ takes values from the augmented alphabet $\bar{\mathcal{A}} = \mathcal{A} \cup \{0\}$. We also denote the set of all devices and the set of active devices by $\mathcal{S}_t = \{1, 2, \ldots, K\}$ and $\mathcal{S}_a$, respectively, where $\mathcal{S}_a \subset \mathcal{S}_t$.¹
A unique spreading code is dedicated to each IoT device and is used simultaneously for spreading and for device identification. This removes the need for the control signaling associated with IoT device identification; such control signals are inefficient for short-packet mMTC. The spreading sequence of the $k$-th IoT device is denoted by $\mathbf{c}_k = [c_{1,k}\; c_{2,k}\; \cdots\; c_{N_c,k}]^T$, where $c_{i,k} \in \{-1, +1\}$ and $N_c$ is the spreading factor. To support a large number of devices, non-orthogonal spreading sequences are employed, resulting in NOMA transmission.
¹For simplicity of notation, we drop the frame and packet indices.
For a single frame, the complex channel coefficient between the $k$-th IoT device and the $m$-th receive antenna at the BS is denoted by $g_{m,k}$. An active IoT device $k \in \mathcal{S}_a$ transmits $N_s$ symbols, denoted by $\mathbf{b}_k = [b_{k,1}, \cdots, b_{k,N_s}]^T$, during a packet. The received baseband signal over a Rayleigh flat fading channel in a single slot of the slotted ALOHA frame at the $m$-th receive antenna of the BS is
$$\mathbf{Y}_m = \sum_{k=1}^{K} g_{m,k}\, \mathbf{c}_k \mathbf{b}_k^T + \mathbf{W}_m, \qquad (1)$$
where $\mathbf{W}_m \in \mathbb{C}^{N_c \times N_s}$, with $w_{i,j} \sim \mathcal{CN}(0, \sigma_w^2)$ and $\mathbb{E}[w_{i,j} w_{u,v}^*] = \sigma_w^2\, \delta[i-u]\,\delta[j-v]$, is the additive white Gaussian noise (AWGN) matrix at the $m$-th receive antenna. The equivalent channel matrix between all IoT devices and the $m$-th receive antenna can be expressed as $\boldsymbol{\Phi}_m = [g_{m,1}\mathbf{c}_1, \cdots, g_{m,K}\mathbf{c}_K] \in \mathbb{C}^{N_c \times K}$.
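As a concrete illustration of the signal model in (1) and of the matrix $\boldsymbol{\Phi}_m$, the following pure-Python sketch generates one received slot at a single antenna. All dimensions, the activity rate, and the noise level are illustrative assumptions, not values from the paper.

```python
import random

random.seed(1)

K, N_c, N_s = 6, 8, 4        # devices, spreading factor, symbols per packet (illustrative)
P_a = 0.25                   # activity rate (illustrative)
sigma_w = 0.1                # noise standard deviation (illustrative)

def cgauss(std=1.0):
    """Sample from CN(0, std^2)."""
    return complex(random.gauss(0, std / 2**0.5), random.gauss(0, std / 2**0.5))

# Per-device state: +/-1 spreading codes, Rayleigh channel gains, BPSK packets
c = [[random.choice([-1, 1]) for _ in range(N_c)] for _ in range(K)]
g = [cgauss() for _ in range(K)]
active = [random.random() < P_a for _ in range(K)]
b = [[random.choice([-1, 1]) if active[k] else 0 for _ in range(N_s)] for k in range(K)]

# Received slot at antenna m, eq. (1): Y_m = sum_k g_{m,k} c_k b_k^T + W_m
Y = [[sum(g[k] * c[k][i] * b[k][j] for k in range(K)) + cgauss(sigma_w)
      for j in range(N_s)] for i in range(N_c)]

print(len(Y), len(Y[0]))  # 8 4
```

Column $k$ of $\boldsymbol{\Phi}_m$ corresponds here to the scaled code `[g[k] * c[k][i] for i in range(N_c)]`, so the same `Y` can equivalently be written as $\boldsymbol{\Phi}_m \mathbf{B} + \mathbf{W}_m$.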
Thus, the received packet at the $m$-th ($m = 1, 2, \cdots, M$) receive antenna is given by
$$\mathbf{Y}_m = \boldsymbol{\Phi}_m \mathbf{B} + \mathbf{W}_m, \qquad (2)$$
where $\mathbf{B} = [\mathbf{b}_1, \cdots, \mathbf{b}_K]^T \in \mathcal{D}^{K \times N_s}$.
Let the total set of IoT devices be decomposed into a finite number of disjoint groups $\mathcal{G}_1, \mathcal{G}_2, \cdots, \mathcal{G}_S$. Within group $\mathcal{G}_j$, every IoT device transmits with power $P_j$; the powers are equal within each group but differ from group to group. The fraction of devices in group $\mathcal{G}_j$ is therefore $|\mathcal{G}_j|/K$. It is assumed that $P_j$ is known at the BS. This configuration captures heterogeneous IoT networks, in which different groups of IoT devices sense different phenomena in a given geographical area. A single group of IoT devices with equal transmit power, resulting in a homogeneous network, is also studied in this paper.

III. PROBLEM FORMULATION
In this section, we present the problem of IoT device AD with known CSI at the receiver, in the presence of sparse or non-sparse transmission. In order to detect the active devices, it is assumed that the BS is equipped with a matched filter and that the equivalent channel matrix $\boldsymbol{\Phi}_m$ (i.e., the CSI) is available. Before AD, the observation matrix $\mathbf{Y}_m$ at the $m$-th receive antenna is passed through the decorrelator to obtain
$$\bar{\mathbf{Y}}_m = \boldsymbol{\Phi}_m^H \mathbf{Y}_m \in \mathbb{C}^{K \times N_s}. \qquad (3)$$
In the following, we investigate the details of the AD problem based on Gaussian detection, to show how a threshold can be computed to distinguish active IoT devices from inactive ones. The output of the decorrelator at the $m$-th receive antenna can be expanded as
$$\bar{\mathbf{Y}}_m = \boldsymbol{\Phi}_m^H \boldsymbol{\Phi}_m \mathbf{B} + \boldsymbol{\Phi}_m^H \mathbf{W}_m = \begin{bmatrix} \sum_{k=1}^{K} g_{m,1}^* g_{m,k}\, \mathbf{c}_1^T \mathbf{c}_k \mathbf{b}_k^T + g_{m,1}^* \mathbf{c}_1^T \mathbf{W}_m \\ \sum_{k=1}^{K} g_{m,2}^* g_{m,k}\, \mathbf{c}_2^T \mathbf{c}_k \mathbf{b}_k^T + g_{m,2}^* \mathbf{c}_2^T \mathbf{W}_m \\ \vdots \\ \sum_{k=1}^{K} g_{m,K}^* g_{m,k}\, \mathbf{c}_K^T \mathbf{c}_k \mathbf{b}_k^T + g_{m,K}^* \mathbf{c}_K^T \mathbf{W}_m \end{bmatrix}.
(4)

Consequently, the received signal from the $k$-th user at the $m$-th receive antenna is
$$\mathbf{r}_k^m = \|g_{m,k}\mathbf{c}_k\|_2^2\, \mathbf{b}_k^T + \sum_{i=1,\, i \neq k}^{K} g_{m,k}^* g_{m,i}\, \mathbf{c}_k^T \mathbf{c}_i \mathbf{b}_i^T + g_{m,k}^* \mathbf{c}_k^T \mathbf{W}_m, \qquad (5)$$
where the second and third terms are the multiuser interference and the additive noise, respectively. Since an IoT device is either active or inactive for the entire packet transmission, we determine the activity status of a device based on each received symbol and then, using the spectrum-sensing results in [20], combine the results obtained from all $N_s$ symbols. Device AD for single-symbol transmission is studied in [12]; we follow that approach to determine the status of each device from each received symbol and then combine the results. The $j$-th received symbol from device $k$ at receive antenna $m$, denoted by $r_{k,j}^m$, is
$$r_{k,j}^m = \|g_{m,k}\mathbf{c}_k\|_2^2\, b_{k,j} + \sum_{i=1,\, i \neq k}^{K} g_{m,k}^* g_{m,i}\, \mathbf{c}_k^T \mathbf{c}_i\, b_{i,j} + g_{m,k}^* \mathbf{c}_k^T \mathbf{w}_j, \qquad (6)$$
where the first term is the desired signal, the second term is the multiuser interference from the other devices, and the third term is the additive noise. For simplicity, we assume BPSK modulation, i.e., the transmitted symbols are drawn from $\mathcal{A} = \{-1, +1\}$ with $p(b_{k,j} = +1) = p(b_{k,j} = -1) = 1/2$. The multiuser interference plus noise in $r_{k,j}^m$ has variance
$$\sigma_{k,j}^2 = \mathrm{var}\Big(\sum_{i=1,\, i \neq k}^{K} g_{m,k}^* g_{m,i}\, \mathbf{c}_k^T \mathbf{c}_i\, b_{i,j} + g_{m,k}^* \mathbf{c}_k^T \mathbf{w}_j\Big) = \sum_{i=1,\, i \neq k}^{K} |g_{m,k}^* g_{m,i}\, \mathbf{c}_k^T \mathbf{c}_i|^2 P_a + \sigma_w^2\, \|g_{m,k}^* \mathbf{c}_k^T\|_2^2. \qquad (7)$$
Now we can approximate $r_{k,j}^m$ for an active device by a Gaussian distribution $\mathcal{N}(\|g_{m,k}\mathbf{c}_k\|_2^2, \sigma_{k,j}^2)$ [20]. To identify the activity of device $k$, our goal is to define a threshold $\tau$ and declare device $k$ active if $|r_{k,j}^m| > \tau$. The probability of error is then
$$P_e^{k,j} = P_a\, p(|r_{k,j}^m| < \tau \mid b_{k,j} \neq 0) + (1 - P_a)\, p(|r_{k,j}^m| > \tau \mid b_{k,j} = 0), \qquad (8)$$
where $p(r_{k,j}^m \mid b_{k,j} \neq 0) \sim \mathcal{N}(\|g_{m,k}\mathbf{c}_k\|_2^2, \sigma_{k,j}^2)$ and $p(r_{k,j}^m \mid b_{k,j} = 0) \sim \mathcal{N}(0, \sigma_{k,j}^2)$.
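As a numerical illustration, the stdlib-only sketch below evaluates the two error terms of the threshold test under the stated Gaussian models and locates the best threshold by grid search; the values chosen for $\|g_{m,k}\mathbf{c}_k\|_2^2$, $\sigma_{k,j}$, and $P_a$ are assumptions for illustration only.

```python
import math

def Phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def p_error(tau, A, sigma, P_a):
    """Error probability of the threshold test |r| > tau, with
    r ~ N(A, sigma^2) for an active device (BPSK +1 case) and
    r ~ N(0, sigma^2) for an inactive one."""
    miss = Phi((tau - A) / sigma) - Phi((-tau - A) / sigma)
    false_alarm = 2.0 * (1.0 - Phi(tau / sigma))
    return P_a * miss + (1.0 - P_a) * false_alarm

# Illustrative values: A stands in for ||g_{m,k} c_k||^2, sigma for sigma_{k,j}
A, sigma, P_a = 4.0, 1.0, 0.1

taus = [i * A / 1000.0 for i in range(1001)]
pe = [p_error(t, A, sigma, P_a) for t in taus]
tau_opt = taus[pe.index(min(pe))]

# Non-negative second differences: the error curve is convex in tau on [0, A]
assert all(pe[i-1] - 2.0 * pe[i] + pe[i+1] >= -1e-9 for i in range(1, 1000))
print(round(tau_opt, 2))  # 2.72
```

The minimizer sits strictly between the inactive mean 0 and the active mean $A$, and the convexity of the curve is exactly the property that motivates searching for a single optimum threshold.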
We can rewrite (8) as
$$P_e^{k,j} = 2(1 - P_a)\, Q\!\left(\frac{\tau}{\sigma_{k,j}}\right) + P_a\, Q\!\left(\frac{\|g_{m,k}\mathbf{c}_k\|_2^2 - \tau}{\sigma_{k,j}}\right), \qquad (9)$$
where $Q(x) = (1/\sqrt{2\pi}) \int_x^{\infty} \exp(-t^2/2)\, dt$ denotes the Gaussian tail function. The probability of error in (9) is a convex function of $\tau$; hence, a well-tuned neural network is capable of solving this problem and detecting the active devices by finding the optimum $\tau$. In the following section, we describe our DL-based algorithm for finding the optimum $\tau$ and minimizing the probability of error.

IV. DL-BASED AD
Device AD is the first step toward effective MUD in grant-free uplink multiple access. Recent studies on AD suggest using CS methods to identify the set of active devices [14], [15]. However, these methods fail in practical scenarios where the activity rate is time-varying and/or unknown. Moreover, they are mainly effective at low device activity rates, i.e., when the sparsity level is high [14]. In this section, we propose our AD algorithm, called CNN-AD, which employs a CNN for heterogeneous IoT networks. With a suitably designed CNN, the underlying pattern in device activity can be learned.

A. CNN-AD Algorithm
Fig. 2 illustrates the structure of the proposed CNN-AD algorithm. It is composed of three blocks: 1) preprocessing, 2) CNN processing, and 3) hypothesis testing.
In the preprocessing step, after sequence matched filtering, we first arrange the observation matrices from all $M$ receive antennas in a 3D tensor as
$$\mathbf{R} = \begin{bmatrix} \mathbf{P}\bar{\mathbf{Y}}_1 \\ \mathbf{P}\bar{\mathbf{Y}}_2 \\ \vdots \\ \mathbf{P}\bar{\mathbf{Y}}_M \end{bmatrix}, \qquad (10)$$
where $\mathbf{P}\bar{\mathbf{Y}}_m \in \mathbb{C}^{K \times N_s}$ with $\bar{\mathbf{Y}}_m = \boldsymbol{\Phi}_m^H \mathbf{Y}_m \in \mathbb{C}^{K \times N_s}$ for $m = 1, 2, \cdots, M$, and $\mathbf{P} \triangleq \mathrm{diag}(p_1, \cdots, p_K)$ with $p_k \in \{1/P_1, \cdots, 1/P_S\}$ for $k = 1, 2, \cdots, K$.
In the CNN processing block, the 3D tensor $\mathbf{R}$ is used as the input of a suitably designed CNN. CNN models benefit from convolutional layers, which perform convolution operations between matrices instead of full matrix multiplications.
This leads to dimensionality reduction for feature extraction and provides the subsequent network layers with an input that retains only the useful features of the original high-dimensional input. IoT device AD can be formulated as either a binary classification or a regression problem. Formulating device AD as a classification problem is straightforward, but it requires careful design of the CNN structure and parameters.
In the hypothesis-testing block, the $K$ outputs of the CNN's sigmoid layer are compared with a predefined threshold to determine the activity status of the IoT devices in the network. If the $k$-th node of the sigmoid layer exceeds the threshold, the $k$-th IoT device is identified as active.

Fig. 2: Model structure of the proposed CNN-AD algorithm: preprocessing of the $M$ received messages $[\mathbf{Y}_1, \cdots, \mathbf{Y}_M]$ into $\mathbf{R}$; a CNN with an $M \times K \times N_s$ input, 128 $3\times 3$ convolution kernels (stride 3, same padding), $2\times 2$ max-pooling (stride 2), a fully connected ReLU layer of 1024 units, and a fully connected sigmoid output layer of $K$ units; and hypothesis testing against the threshold 0.5.

B. Training Phase
In order to train the designed CNN, we define the activity vector $\mathbf{a}$ as
$$\mathbf{a} = [a_1\; a_2\; \cdots\; a_K]^T, \qquad (11)$$
where $a_k$ is 1 when the $k$-th IoT device is active and 0 otherwise. We train our model with $N$ independent training samples $(\mathbf{R}^{(j)}, \mathbf{a}^{(j)})$, $j = 1, 2, \cdots, N$, where $\mathbf{a}^{(j)}$ and $\mathbf{R}^{(j)}$ are the activity vector and observation tensor of the $j$-th training sample, respectively. Our objective is to train the designed CNN to generate the desired output vector $\mathbf{a}^{(j)}$ for input tensor $\mathbf{R}^{(j)}$. The model learns a non-linear transformation $\Psi$ such that
$$\hat{\mathbf{a}}^{(j)} = \Psi(\mathbf{R}^{(j)}; \Theta), \qquad (12)$$
where $\Theta$ is the set of parameters learned during training by minimizing the loss function. The output of the model, $\hat{\mathbf{a}}$, gives the activity probabilities of the IoT devices.
Since there are two classes (active or inactive) for each IoT device, the loss function is chosen as the binary cross-entropy. For each training sample $j$, the binary cross-entropy loss compares the predicted activity probabilities $\hat{\mathbf{a}}^{(j)}$ with the true activity vector $\mathbf{a}^{(j)}$:
$$\mathrm{Loss}(\Theta) = \frac{1}{N} \sum_{j=1}^{N} -\Big[\mathbf{a}^{(j)} \log(\hat{\mathbf{a}}^{(j)}) + (1 - \mathbf{a}^{(j)}) \log(1 - \hat{\mathbf{a}}^{(j)})\Big], \qquad (13)$$
where $\log(\cdot)$ operates element-wise on $\hat{\mathbf{a}}^{(j)}$ and the vector multiplication is also element-wise.

V. EXPERIMENTS
In this section, we evaluate the performance of the proposed CNN-AD algorithm through various simulation experiments and compare it with some existing methods.

A. Simulation Setup
We consider an IoT network with $K$ devices, where $K > N_c$, and pseudo-random codes are used as the spreading sequences of the IoT devices. The probability of activity $P_a$ is unknown and time-varying from one packet to another in the range $P_a \in [0, P_{\max}]$, with $P_{\max} = 0.1$. BPSK modulation is used for uplink transmission. Without loss of generality, the channel coefficients between the IoT devices and the BS are modeled as independent zero-mean complex Gaussian random variables with variance $\sigma_{k,m}^2 = 1$ for $k \in \mathcal{S}_t$ and $m \in \{1, \cdots, M\}$. The additive white noise is modeled as zero-mean complex Gaussian with variance $\sigma_w^2$, and the signal-to-noise ratio (SNR) in dB is defined as $\gamma \triangleq 10 \log_{10}(\sigma_s^2/\sigma_w^2)$, where $\sigma_s^2 = P_a P_t$ is the average transmit power and $P_t = \sum_{k=1}^{K} p_k$ is the total transmission power. Unless otherwise mentioned, we consider spreading sequences with spreading factor $N_c = 32$.
In order to train CNN-AD, we generate $10^5$ independent samples and use 80% for training and the rest for validation and testing. The Adam optimizer [21] with a learning rate of $10^{-3}$ is used to minimize the cross-entropy loss function in (13).
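The per-sample binary cross-entropy of (13) and the 0.5-threshold hypothesis-testing block are easy to sketch in pure Python; the probability vectors below are illustrative, and this version additionally averages over the $K$ devices of one sample (the paper's (13) averages over the $N$ training samples).

```python
import math

def bce_loss(a_true, a_hat, eps=1e-12):
    """Binary cross-entropy between a true 0/1 activity vector and
    predicted activity probabilities, averaged over the K devices."""
    total = 0.0
    for a, p in zip(a_true, a_hat):
        p = min(max(p, eps), 1.0 - eps)   # clip for numerical stability
        total += -(a * math.log(p) + (1.0 - a) * math.log(1.0 - p))
    return total / len(a_true)

def hypothesis_test(a_hat, threshold=0.5):
    """Hypothesis-testing block: device k is declared active if the
    k-th sigmoid output exceeds the threshold."""
    return [1 if p > threshold else 0 for p in a_hat]

a_true = [1, 0, 0, 1, 0]                    # illustrative activity vector
a_hat  = [0.93, 0.08, 0.41, 0.77, 0.02]     # illustrative sigmoid outputs

print(hypothesis_test(a_hat))               # [1, 0, 0, 1, 0]
print(round(bce_loss(a_true, a_hat), 3))
```

Confident predictions that match `a_true` drive the loss toward zero, which is what gradient descent on (13) pushes the CNN toward during training.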
Fig. 3: Achieved AER versus SNR for MMSE with a priori AD by OMP, AMP, and CNN-AD (uniform and non-uniform power), without knowing the number of active devices.

Fig. 4: Impact of $P_a$ on the performance of the different methods as the a priori AD for MMSE, in terms of achieved BER.

B. Simulation Results
1) Performance Evaluation of CNN-AD: We assess CNN-AD through various simulations and compare it with existing CS-based methods, including orthogonal matching pursuit (OMP) [22] and approximate message passing (AMP) [23].
The impact of SNR on the activity error rate (AER) achieved by the different AD algorithms in both homogeneous and heterogeneous IoT networks, with uniform and non-uniform power allocation, is shown in Fig. 3. The AERs are compared over a wide range of SNRs in an IoT system with $K = 40$ IoT devices in total and a single BS with $M = 100$ receive antennas. As expected, the AER of all AD algorithms decreases with increasing SNR. However, CNN-AD achieves the best performance since, unlike the non-Bayesian greedy algorithms OMP and AMP, our method relies on the statistical distributions of the device activities and channels and exploits them in the training process.

TABLE I: Performance analysis of the different algorithms for two typical IoT devices for $P_{\max} = 0.1$ at $\gamma = 10$ dB.

IoT Device | Model  | Precision | Recall | F1-score
Device A   | OMP    | 28%       | 32%    | 30%
           | AMP    | 31%       | 35%    | 33%
           | CNN-AD | 73%       | 92%    | 81%
Device B   | OMP    | 33%       | 32%    | 32%
           | AMP    | 38%       | 35%    | 36%
           | CNN-AD | 100%      | 83%    | 91%

Fig.
4 illustrates the effect of the activity rate on the bit error rate (BER) for minimum mean square error (MMSE)-MUD with the different AD algorithms at $\gamma = 10$ dB. As the activity rate increases, the number of active devices increases accordingly, making it more difficult to detect all the active devices; this results in a higher BER. We use $P_{\max} = 0.1$ to train CNN-AD, so the MMSE-MUD with CNN-AD shows performance degradation for activity rates larger than $P_{\max} = 0.1$. However, it still outperforms the MMSE-MUD with the OMP and AMP AD algorithms. This performance improves when CNN-AD is trained with a larger value of $P_{\max}$.
We further investigate the AD algorithms in terms of other metrics for two typical IoT devices for $P_{\max} = 0.1$ at $\gamma = 10$ dB, as presented in Table I. In this table we compare the precision, recall, and F1-score, defined in [24], achieved by CNN-AD with those of the OMP and AMP AD algorithms. All metrics are improved by using CNN-AD.

VI. CONCLUSIONS
In this paper, we considered the problem of AD in grant-free NOMA IoT networks. Depending on the application, IoT devices can be inactive for long periods and active only when transmitting to the BS. Hence, identifying the active devices is required for accurate data detection. Some studies propose CS-based methods for AD; however, those methods require a high level of message sparsity. To remove this requirement and to exploit the statistical properties of the channels, we proposed a CNN-based method called CNN-AD to detect active IoT devices. Comparison with available methods shows the strength of our algorithm.

ACKNOWLEDGMENT
The study presented in this paper is supported by Alberta Innovates and the Natural Sciences and Engineering Research Council of Canada (NSERC).

REFERENCES
[1] G. Durisi, T. Koch, and P.
Popovski, "Toward massive, ultrareliable, and low-latency wireless communication with short packets," Proceedings of the IEEE, vol. 104, no. 9, pp. 1711–1726, 2016.
[2] L. D. Xu, W. He, and S. Li, "Internet of things in industries: A survey," IEEE Transactions on Industrial Informatics, vol. 10, no. 4, pp. 2233–2243, 2014.
[3] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, "Internet of Things: A survey on enabling technologies, protocols, and applications," IEEE Communications Surveys & Tutorials, vol. 17, no. 4, pp. 2347–2376, 2015.
[4] C. Bockelmann, N. Pratas, H. Nikopour, K. Au, T. Svensson, C. Stefanovic, P. Popovski, and A. Dekorsy, "Massive machine-type communications in 5G: Physical and MAC-layer solutions," IEEE Communications Magazine, vol. 54, no. 9, pp. 59–65, 2016.
[5] W. Ejaz and M. Ibnkahla, "Multiband spectrum sensing and resource allocation for IoT in cognitive 5G networks," IEEE Internet of Things Journal, vol. 5, no. 1, pp. 150–163, 2018.
[6] Z. Ding, P. Fan, and H. V. Poor, "Impact of user pairing on 5G non-orthogonal multiple-access downlink transmissions," IEEE Transactions on Vehicular Technology, vol. 65, no. 8, pp. 6010–6023, 2016.
[7] Y. Saito, Y. Kishiyama, A. Benjebbour, T. Nakamura, A. Li, and K. Higuchi, "Non-orthogonal multiple access (NOMA) for cellular future radio access," in 2013 IEEE 77th Vehicular Technology Conference (VTC Spring), 2013, pp. 1–5.
[8] K. Au, L. Zhang, H. Nikopour, E. Yi, A. Bayesteh, U. Vilaipornsawai, J. Ma, and P. Zhu, "Uplink contention based SCMA for 5G radio access," in 2014 IEEE Globecom Workshops (GC Wkshps), 2014, pp. 900–905.
[9] L. Liu, E. G. Larsson, W. Yu, P. Popovski, C. Stefanovic, and E. de Carvalho, "Sparse signal processing for grant-free massive connectivity: A future paradigm for random access protocols in the Internet of Things," IEEE Signal Processing Magazine, vol. 35, no. 5, pp. 88–99, Sep. 2018.
[10] S.
Verdu et al., Multiuser Detection. Cambridge University Press, 1998.
[11] Y. Zhang, Q. Guo, Z. Wang, J. Xi, and N. Wu, "Block sparse Bayesian learning based joint user activity detection and channel estimation for grant-free NOMA systems," IEEE Transactions on Vehicular Technology, vol. 67, no. 10, pp. 9631–9640, 2018.
[12] H. Zhu and G. B. Giannakis, "Exploiting sparse user activity in multiuser detection," IEEE Transactions on Communications, vol. 59, no. 2, pp. 454–465, Feb. 2011.
[13] H. F. Schepker, C. Bockelmann, and A. Dekorsy, "Coping with CDMA asynchronicity in compressive sensing multi-user detection," in 2013 IEEE 77th Vehicular Technology Conference (VTC Spring), Jun. 2013, pp. 1–5.
[14] Z. Chen, F. Sohrabi, and W. Yu, "Sparse activity detection for massive connectivity," IEEE Transactions on Signal Processing, vol. 66, no. 7, pp. 1890–1904, Apr. 2018.
[15] K. Takeuchi, T. Tanaka, and T. Kawabata, "Performance improvement of iterative multiuser detection for large sparsely spread CDMA systems by spatial coupling," IEEE Transactions on Information Theory, vol. 61, no. 4, pp. 1768–1794, Apr. 2015.
[16] Y. Wang, X. Zhu, E. G. Lim, Z. Wei, Y. Liu, and Y. Jiang, "Compressive sensing based user activity detection and channel estimation in uplink NOMA systems," in 2020 IEEE Wireless Communications and Networking Conference (WCNC), 2020, pp. 1–6.
[17] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016, vol. 1.
[18] M. Mohammadkarimi, M. Mehrabi, M. Ardakani, and Y. Jing, "Deep learning based sphere decoding," IEEE Trans. Wireless Commun., pp. 1–1, 2019.
[19] X. Miao, D. Guo, and X. Li, "Grant-free NOMA with device activity learning using long short-term memory," IEEE Wireless Communications Letters, pp. 1–1, 2020.
[20] W. Zhang, R. K. Mallik, and K. B.
Letaief, "Cooperative spectrum sensing optimization in cognitive radio networks," in 2008 IEEE International Conference on Communications, 2008, pp. 3411–3415.
[21] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[22] T. T. Cai and L. Wang, "Orthogonal matching pursuit for sparse signal recovery with noise," IEEE Transactions on Information Theory, vol. 57, no. 7, pp. 4680–4688, 2011.
[23] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, vol. 106, no. 45, pp. 18 914–18 919, 2009.
[24] C. Goutte and E. Gaussier, "A probabilistic interpretation of precision, recall and F-score, with implication for evaluation," in European Conference on Information Retrieval. Springer, 2005, pp. 345–359.

diff --git a/HdAzT4oBgHgl3EQfUvz7/content/tmp_files/load_file.txt b/HdAzT4oBgHgl3EQfUvz7/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d6867100ee82757b769cba265fe5c4f483b770ff
--- /dev/null
+++ b/HdAzT4oBgHgl3EQfUvz7/content/tmp_files/load_file.txt
@@ -0,0 +1,387 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf,len=386
+page_content='Activity Detection for Grant-Free NOMA in Massive IoT Networks Mehrtash Mehrabi, Student Member, IEEE, Mostafa Mohammadkarimi, Member, IEEE, and Masoud Ardakani, Senior Member, IEEE Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, T6G 1H9, Canada' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'}
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content='ca Abstract—Recently, grant-free transmission paradigm has been introduced for massive Internet of Things (IoT) networks to save both time and bandwidth and transmit the message with low latency.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' In order to accurately decode the message of each device at the base station (BS), first, the active devices at each transmission frame must be identified.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' In this work, first we investigate the problem of activity detection as a threshold comparing problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' We show the convexity of the activity detection method through analyzing its probability of error which makes it possible to find the optimal threshold for minimizing the activity detection error.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Consequently, to achieve an optimum solution, we propose a deep learning (DL)-based method called convolutional neural network (CNN)-activity detection (AD).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' In order to make it more practical, we consider unknown and time-varying activity rate for the IoT devices.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Our simulations verify that our proposed CNN-AD method can achieve higher performance compared to the existing non-Bayesian greedy-based methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' This is while existing methods need to know the activity rate of IoT devices, while our method works for unknown and even time-varying activity rates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Index Terms—Activity detection, IoT, deep learning, NOMA, massive MIMO.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' INTRODUCTION W IRELESS technology recent advances provide massive connectivity for machines and objects resulting in the Internet of Things (IoT) [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' The demand for the IoT is expected to grow drastically in the near future with numerous applications in health care systems, education, businesses and governmental services [2]–[4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' As the demand for connectivity in IoT systems is growing rapidly, it is crucial to improve the spectrum efficiency [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Hence, the non-orthogonal multiple access (NOMA) has been introduced [6].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' To address the main challenges of IoT, including access collisions and massive connectivity, NOMA allows devices to access the channel non-orthogonally by either power- domain [7] or code-domain [8] multiplexing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Meanwhile, this massive connectivity is highly affected by the conventional grant-based NOMA transmission scheme, where the exchange of control signaling between the base station (BS) and IoT devices is needed for channel access.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' The excessive signaling overhead causes spectral deficiency and large transmission latency.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Grant-free NOMA has been introduced to make a flexible transmission mechanism for the devices and save time and bandwidth by removing the need for the exchange of control signaling between the BS and devices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Hence, devices can transmit data randomly at any time slot without any request- grant procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' In many IoT applications, a few devices become active for a short period of time to communicate with the BS while others are inactive [9].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' In IoT networks with a large number of nodes each with a small probability of activity, multiuser detection (MUD) methods heavily rely on activity detection (AD) prior to detection and decoding [4], [10]–[13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' For uplink transmission in IoT systems with grant-free NOMA transmission scheme, where the performance of MUD can be severely affected by the multi-access interference, the reliable detection of both activity and transmitted signal is very challenging and can be computationally expensive [10], [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' There have been many studies in the literature suggesting compressive sensing (CS) methods for joint activity and data detection [12]–[16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Although CS methods can achieve a re- liable MUD, they only work in networks with sporadic traffic pattern, and are expensive in terms of computational complexity [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Recently, deep learning (DL) models have observed a lot of interests in communication systems and more specifically in signal detection [17]–[19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' A study in [19] suggests to use DL for activity and data detection, however they consider a deterministic traffic pattern for the activity which is not valid in all environments.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' In this work, we first formulate the problem of IoT activity detection as a threshold comparing problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' We then analyze the probability of error of this activity detection method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Observing that this probability of error is a convex function of the decision threshold, we raise the question of finding the optimal threshold for minimizing the activity detection error.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' To achieve this goal, we propose a convolutional neural network (CNN)-based AD algorithm for grant-fee code-domain uplink NOMA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Unlike existing CS-based AD algorithms, our solution does not need to know the exact number of active devices or even the activity rate of IoT devices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' In fact, in our system model we assume a time-varying and unknown activity rate and a heterogeneous network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' Simulation results verify the success the proposed algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HdAzT4oBgHgl3EQfUvz7/content/2301.01274v1.pdf'} +page_content=' The rest of this paper is organized as follows.' 
We present the system model in Section II. In Section III, we formulate the device AD problem and derive its probability of error. Section IV introduces our CNN-based solution for device AD. The simulation results are presented in Section V. Finally, the paper is concluded in Section VI.

arXiv:2301.01274v1 [eess.SP] 23 Dec 2022

Fig. 1: CDMA slotted ALOHA transmission frame (each transmission frame consists of a channel estimation phase followed by Nf slots of length Tt).

A. Notations
Throughout this paper, (·)* represents the complex conjugate.
Matrix transpose and Hermitian operators are shown by (·)^T and (·)^H, respectively. The operator diag(b) returns a square diagonal matrix with the elements of vector b on the main diagonal. Furthermore, E[·] is the statistical expectation, â denotes an estimated value for a, and the size of set S is shown by |S|. The constellation and m-dimensional complex spaces are denoted by D and C^m, respectively. Finally, the circularly symmetric complex Gaussian distribution with mean vector µ and covariance matrix Σ is denoted by CN(µ, Σ).

II. SYSTEM MODEL
We consider code-division multiple access (CDMA) uplink transmission, where K IoT devices communicate with a single IoT BS equipped with M receive antennas. This commonly used model [3], [6], [19] also considers a frame structure for uplink transmission composed of a channel estimation phase followed by CDMA slotted ALOHA data transmissions, as shown in Fig. 1.
Each frame contains Nf short packets of length Tt = Ns Ts, where Ns is the number of symbols per IoT packet and Ts is the symbol duration. It is assumed that the channel is fixed during each frame but varies from one frame to another. The channel state information (CSI) is acquired at the BS during the channel estimation phase. As is common in massive machine-type communications (mMTC), we assume that the IoT devices are only active on occasion and transmit short data packets during each frame. The activity rate of the IoT devices is denoted by Pa ∈ [0, Pmax], which is assumed to be unknown and time-varying from one packet transmission to another. Let bk ∈ A be the transmitted symbol of the k-th device, chosen from a finite alphabet A when the k-th device is active; otherwise, bk = 0.
Consequently, bk can take values from an augmented alphabet Ā = A ∪ {0}. We also denote the set of all devices and the set of active devices by St = {1, 2, . . . , K} and Sa, respectively, where Sa ⊂ St.¹ A unique spreading code is dedicated to each IoT device, which is used simultaneously for spreading and for device identification. This removes the need for the control signaling associated with IoT device identification; such control signaling is inefficient for short-packet mMTC. The spreading sequence for the k-th IoT device is denoted by c_k = [c_{1,k} c_{2,k} · · · c_{Nc,k}]^T, where c_{i,k} ∈ {−1, +1} and Nc is the spreading factor.
¹For simplicity of notation, we omit the frame and packet indices.

To support a large number of devices, non-orthogonal spreading sequences are employed, resulting in NOMA transmission. For a single frame, the complex channel coefficient between the k-th IoT device and the m-th receive antenna at the BS is denoted by g_{m,k}. Each active IoT device k ∈ Sa transmits Ns symbols, denoted by b_k = [b_{k,1}, · · · , b_{k,Ns}]^T, during a packet. The received baseband signal over a Rayleigh flat-fading channel in a single slot of the slotted ALOHA frame at the m-th receive antenna of the BS is expressed as

    Y_m = \sum_{k=1}^{K} g_{m,k} c_k b_k^T + W_m,   (1)

where W_m ∈ C^{Nc×Ns}, with w_{i,j} ~ CN(0, σ_w^2) and E[w_{i,j} w_{u,v}^*] = σ_w^2 δ[i−u] δ[j−v], is the additive white Gaussian noise (AWGN) matrix at the m-th receive antenna. The equivalent channel matrix between all IoT devices and the m-th receive antenna can be expressed as Φ_m = [g_{m,1} c_1, · · · , g_{m,K} c_K] ∈ C^{Nc×K}.
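As a sanity check on the signal model, the per-antenna received packet of eq. (1) can be simulated in a few lines of NumPy. This is a minimal sketch; all sizes and the noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

K, M, Nc, Ns = 40, 4, 32, 8   # devices, antennas, spreading factor, symbols per packet
Pa, sigma_w = 0.1, 0.5        # activity rate and noise std (illustrative)

# Spreading codes c_k with entries in {-1, +1}, one column per device
C = rng.choice([-1.0, 1.0], size=(Nc, K))

# Device activity and BPSK symbols; b_k = 0 for inactive devices
active = rng.random(K) < Pa
B = np.where(active[:, None], rng.choice([-1.0, 1.0], size=(K, Ns)), 0.0)

# Rayleigh flat fading g_{m,k} ~ CN(0, 1), fixed over the frame
G = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

def received(m):
    """Eq. (1): Y_m = sum_k g_{m,k} c_k b_k^T + W_m."""
    Phi_m = G[m] * C                      # Nc x K, column k is g_{m,k} c_k
    W_m = (sigma_w / np.sqrt(2)) * (rng.standard_normal((Nc, Ns))
                                    + 1j * rng.standard_normal((Nc, Ns)))
    return Phi_m @ B + W_m                # equivalently Phi_m B + W_m, eq. (2)

Y0 = received(0)
print(Y0.shape)  # (32, 8)
```

Since only a handful of rows of B are nonzero, the signal part of Y_m is a low-rank mixture of the active devices' spreading codes, which is exactly what makes CS-style recovery plausible at low activity rates.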
Thus, the received packet at the m-th (m = 1, 2, · · · , M) receive antenna is given by

    Y_m = Φ_m B + W_m,   (2)

where B = [b_1, · · · , b_K]^T ∈ D^{K×Ns}. Let the total set of IoT devices be decomposed into a finite number of disjoint groups G_1, G_2, · · · , G_S. Within group G_j, the power of every IoT device is P_j; the powers of the devices are equal within each group but differ from group to group. The fraction of devices in group G_j is therefore |G_j|/K. It is assumed that P_j is known at the BS. This configuration captures heterogeneous IoT networks, where groups of IoT devices sense different phenomena in a given geographical area. A single group of IoT devices with equal transmit power, resulting in a homogeneous network, is also studied in this paper.
III. PROBLEM FORMULATION
In this section, we present the problem of IoT device AD with known CSI at the receiver, in the presence of sparse or non-sparse transmission. To detect the active devices, it is assumed that the BS is equipped with a matched filter and that the precoding matrix and CSI Φ_m are available. Before AD, the observation matrix at the m-th receive antenna Y_m is passed through the decorrelator to obtain

    \bar{Y}_m = Φ_m^H Y_m ∈ C^{K×Ns}.   (3)

In the following, we investigate the details of the AD problem based on Gaussian detection to show how a threshold can be computed to distinguish active IoT devices from inactive ones. The output of the decorrelator receiver for the m-th receive antenna is expressed as

    \bar{Y}_m = Φ_m^H Φ_m B + Φ_m^H W_m
              = [ \sum_{k=1}^{K} g_{m,1}^* g_{m,k} c_1^T c_k b_k^T + g_{m,1}^* c_1^T W_m ]
                [ \sum_{k=1}^{K} g_{m,2}^* g_{m,k} c_2^T c_k b_k^T + g_{m,2}^* c_2^T W_m ]
                [                               ⋮                                        ]
                [ \sum_{k=1}^{K} g_{m,K}^* g_{m,k} c_K^T c_k b_k^T + g_{m,K}^* c_K^T W_m ].   (4)
Consequently, the received signal from the k-th user at the m-th receive antenna is

    r_k^m = ||g_{m,k} c_k||_2^2 b_k^T + \sum_{i=1, i≠k}^{K} g_{m,k}^* g_{m,i} c_k^T c_i b_i^T + g_{m,k}^* c_k^T W_m,   (5)

where the second and third terms are multiuser interference and additive noise, respectively. Since an IoT device is either active or inactive for the entire packet transmission, we determine the activity status of a device based on each received symbol, use the results in [20] for spectrum sensing, and combine the results obtained from all Ns symbols. Device AD in the case of single-symbol transmission is studied in [12]; we follow that approach to determine the status of each device from each received symbol and then combine the results. The j-th received symbol from device k at receive antenna m, denoted by r_{k,j}^m, is

    r_{k,j}^m = ||g_{m,k} c_k||_2^2 b_{k,j} + \sum_{i=1, i≠k}^{K} g_{m,k}^* g_{m,i} c_k^T c_i b_{i,j} + g_{m,k}^* c_k^T w_j,   (6)

where the first term is the desired signal, the second term is multiuser interference from the other devices, and the third term is additive noise. For the sake of simplicity, we assume that BPSK modulation is used, i.e.,
the transmitted symbols are drawn from A = {−1, +1} and p(b_{k,j} = +1) = p(b_{k,j} = −1) = 1/2. The multiuser interference plus noise in r_{k,j}^m has variance

    σ_{k,j}^2 = var( \sum_{i=1, i≠k}^{K} g_{m,k}^* g_{m,i} c_k^T c_i b_{i,j} + g_{m,k}^* c_k^T w_j )
              = \sum_{i=1, i≠k}^{K} |g_{m,k}^* g_{m,i} c_k^T c_i|^2 P_a + σ_w^2 ||g_{m,k}^* c_k^T||_2^2.   (7)

Now we can approximate r_{k,j}^m by a Gaussian distribution as N(||g_{m,k} c_k||_2^2, σ_{k,j}^2) [20]. To identify the activity of device k, our goal is to propose an algorithm that defines a threshold τ and declares device k active if |r_{k,j}^m| > τ. The probability of error is then computed as

    P_e^{k,j} = P_a p(|r_{k,j}^m| < τ | b_{k,j} ≠ 0) + (1 − P_a) p(|r_{k,j}^m| > τ | b_{k,j} = 0),   (8)

where p(r_{k,j}^m | b_{k,j} ≠ 0) ~ N(||g_{m,k} c_k||_2^2, σ_{k,j}^2) and p(r_{k,j}^m | b_{k,j} = 0) ~ N(0, σ_{k,j}^2). We can rewrite (8) as

    P_e^{k,j} = 2(1 − P_a) Q(τ/σ_{k,j}) + P_a Q((||g_{m,k} c_k||_2^2 − τ)/σ_{k,j}),   (9)

where Q(x) = (1/√(2π)) ∫_x^∞ exp(−t²/2) dt denotes the Gaussian tail function.
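Eq. (9) is easy to evaluate and minimize numerically, which also makes the convexity claim below concrete. The sketch uses assumed illustrative values for the signal mean ||g_{m,k} c_k||_2^2, the interference-plus-noise std, and P_a.

```python
import numpy as np
from math import erfc

def Q(x):
    """Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / np.sqrt(2))

def p_error(tau, mu, sigma, Pa):
    """Eq. (9): P_e = 2(1 - Pa) Q(tau/sigma) + Pa Q((mu - tau)/sigma),
    where mu = ||g_{m,k} c_k||_2^2 is the mean under activity."""
    return 2 * (1 - Pa) * Q(tau / sigma) + Pa * Q((mu - tau) / sigma)

# Illustrative values (assumed, not from the paper)
mu, sigma, Pa = 32.0, 6.0, 0.1

taus = np.linspace(0.0, mu, 2001)
pe = np.array([p_error(t, mu, sigma, Pa) for t in taus])
tau_opt = taus[pe.argmin()]
print(f"optimal threshold ~ {tau_opt:.2f}, min error ~ {pe.min():.4f}")
```

On [0, mu] both Q terms are convex in τ, so the grid minimum is the global optimum; the CNN described next learns this decision rule implicitly rather than solving (9) per device.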
The probability of error in (9) is a convex function of τ; hence, a well-tuned neural network is capable of solving this problem and detecting the active devices by finding the optimum τ. In the following section, we present our DL-based algorithm to find the optimum τ and minimize the probability of error.

IV. DL-BASED AD
Device AD is the first step toward effective MUD in grant-free uplink multiple access. Recent studies on AD suggest using CS methods to identify the set of active devices [14], [15]. However, these methods fail in practical scenarios where the activity rate is time-varying and/or unknown. Moreover, these methods are mainly effective in scenarios with a low device activity rate, i.e., when the sparsity level is high [14].
In this section, we propose our AD algorithm, called CNN-AD, which employs a CNN for heterogeneous IoT networks. With a suitably designed CNN, the underlying pattern in device activity can be learnt.

A. CNN-AD Algorithm
Fig. 2 illustrates the structure of the proposed CNN-AD algorithm. As seen, it is composed of three blocks: 1) preprocessing, 2) CNN processing, and 3) hypothesis testing. In the preprocessing step, after sequence matched filtering, we first sort the observation matrices from all M receive antennas into a 3D tensor as

    R = [ P \bar{Y}_1 ]
        [ P \bar{Y}_2 ]
        [      ⋮      ]
        [ P \bar{Y}_M ],   (10)

where P \bar{Y}_m ∈ C^{K×Ns}, \bar{Y}_m = Φ_m^H Y_m ∈ C^{K×Ns} for m = 1, 2, · · · , M, and P ≜ diag(p_1, · · · , p_K) with p_k ∈ {1/P_1, · · · , 1/P_S} for k = 1, 2, · · · , K.
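The preprocessing of eq. (10) amounts to a decorrelation followed by a per-device power normalization, stacked across antennas. A minimal NumPy sketch, with assumed sizes and an assumed two-group power split:

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, Nc, Ns = 40, 4, 32, 8          # illustrative sizes (assumed)

# Inverse group powers p_k in {1/P_1, ..., 1/P_S}; here two assumed groups
p = np.where(np.arange(K) < K // 2, 1.0, 0.5)
P = np.diag(p)

# Stand-ins for the equivalent channels Phi_m and received packets Y_m
Phi = rng.standard_normal((M, Nc, K)) + 1j * rng.standard_normal((M, Nc, K))
Y = rng.standard_normal((M, Nc, Ns)) + 1j * rng.standard_normal((M, Nc, Ns))

# Eq. (10): stack P @ (Phi_m^H Y_m) over the M antennas into a 3D tensor
R = np.stack([P @ (Phi[m].conj().T @ Y[m]) for m in range(M)])
print(R.shape)  # (4, 40, 8), i.e. M x K x Ns
```

Normalizing by the known group powers puts the decorrelator outputs of all groups on a common scale, so one CNN can serve the whole heterogeneous network.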
In the CNN processing block, the 3D tensor R is used as the input to a suitably designed CNN. CNN models benefit from convolutional layers, which perform convolution operations between matrices instead of multiplications. This leads to dimension reduction for feature extraction and provides the subsequent network layers with an input that contains only the useful features of the original high-dimensional input. The IoT device AD task can be formulated as a binary classification or regression problem. Formulating device AD as a classification problem is straightforward, but it requires careful design of the CNN's structure and parameters. In the hypothesis-testing block, the K outputs of the CNN's sigmoid layer are compared with a predefined threshold to determine the activity status of the IoT devices in the network. If the k-th node of the sigmoid layer exceeds the threshold, the k-th IoT device is identified as active.
B. Training Phase
In order to train the designed CNN, we define the activity vector a as

    a = [a_1 a_2 · · · a_K]^T,   (11)

where a_k is 1 when the k-th IoT device is active and 0 otherwise. We train our model with N independent training samples (R^(j), a^(j)), j = 1, 2, · · · , N, where a^(j) and R^(j) are the activity vector and observation matrix of the j-th training sample, respectively. Our objective is to train the designed CNN to generate the desired output vector a^(j) for input matrix R^(j).

Fig. 2: Model structure of the proposed CNN-AD algorithm (received messages [Y_1, Y_2, · · · , Y_M] → preprocessing into R of size M × K × Ns → CONV 3×3, 128 kernels, stride 3, same padding → MAX-POOL 2×2, stride 2 → FC 1024, ReLU → FC K, sigmoid → hypothesis test â ≥ 0.5 → Ŝ_a).
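The layer stack in Fig. 2 can be sketched in a few lines of PyTorch. This is an illustrative reconstruction, not the authors' code: the paper does not state how the complex tensor R is fed to the real-valued network, so here its real and imaginary parts are stacked as 2M input channels, and the flattened size after pooling is left to `LazyLinear` to infer.

```python
import torch
import torch.nn as nn

K, M, Ns = 40, 4, 8   # illustrative sizes (assumed)

model = nn.Sequential(
    nn.Conv2d(2 * M, 128, kernel_size=3, stride=3, padding=1),  # CONV 3x3, 128 kernels
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),                      # MAX-POOL 2x2, stride 2
    nn.Flatten(),
    nn.LazyLinear(1024), nn.ReLU(),                             # FC 1024, ReLU
    nn.Linear(1024, K), nn.Sigmoid(),                           # FC K, sigmoid
)

R = torch.randn(16, 2 * M, K, Ns)   # a batch of 16 preprocessed tensors
probs = model(R)                    # per-device activity probabilities, shape (16, K)
active = probs >= 0.5               # hypothesis test of Fig. 2
```

The sigmoid head produces one independent activity probability per device, matching the per-device binary cross-entropy of eq. (13) below rather than a softmax over a single class label.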
The model tries to learn a non-linear transformation Ψ such that

    â^(j) = Ψ(R^(j); Θ),   (12)

where Θ is the set of parameters learned during training by minimizing the loss function. The output of the model, i.e., â, determines the activity probabilities of the IoT devices. Since there are two classes (active or inactive) for each IoT device, the loss function is chosen as the binary cross-entropy. For each training sample j, the binary cross-entropy loss function compares the probability that the IoT devices are active (â^(j)) with the true activity vector a^(j) as

    Loss(Θ) = (1/N) \sum_{j=1}^{N} −[ a^(j) log(â^(j)) + (1 − a^(j)) log(1 − â^(j)) ],   (13)

where log(·) performs an element-wise log operation on â^(j), and the vector multiplications are also element-wise.
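Eq. (13) can be checked directly in NumPy. In this sketch the element-wise terms are reduced by summing over the K devices before averaging over the N samples (one common reduction; averaging over devices is equivalent up to scale), and predictions are clipped to avoid log(0) at exact labels.

```python
import numpy as np

def bce_loss(a_true, a_hat, eps=1e-12):
    """Eq. (13): element-wise binary cross-entropy, summed over the K
    devices and averaged over the N samples. Inputs are (N, K) arrays:
    labels in {0, 1} and predicted probabilities in [0, 1]."""
    a_hat = np.clip(a_hat, eps, 1 - eps)   # guard against log(0)
    per_elem = -(a_true * np.log(a_hat) + (1 - a_true) * np.log(1 - a_hat))
    return per_elem.sum(axis=1).mean()

a          = np.array([[1, 0, 0, 1]])          # true activity vector
a_hat_good = np.array([[0.9, 0.1, 0.2, 0.8]])  # confident, mostly correct
a_hat_bad  = np.array([[0.1, 0.9, 0.8, 0.2]])  # confident, wrong
print(bce_loss(a, a_hat_good) < bce_loss(a, a_hat_bad))  # True
```

As expected, confidently wrong activity probabilities are penalized far more heavily than near-correct ones, which is what drives the sigmoid outputs toward the true activity vector during training.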
V. EXPERIMENTS
In this section, we evaluate the performance of the proposed CNN-AD algorithm through various simulation experiments and compare it with some of the existing methods.

A. Simulation Setup
We consider an IoT network with K devices, where K > Nc, and pseudo-random codes are used as the spreading sequences for the IoT devices. The probability of activity Pa is considered to be unknown and time-varying from one packet to another in the range Pa ∈ [0, Pmax], where Pmax = 0.1. BPSK modulation is used for uplink transmission. Without loss of generality, the channel coefficients between the IoT devices and the BS are modeled as independent zero-mean complex Gaussian random variables with variance σ²_{k,m} = 1, k ∈ St and m ∈ {1, · · · , M}.
The additive white noise is modeled as zero-mean complex Gaussian random variables with variance σ_w², and the signal-to-noise ratio (SNR) in dB is defined as γ ≜ 10 log(σ_s²/σ_w²), where σ_s² = Pa Pt is the average transmit power, with Pt = \sum_{k=1}^{K} p_k the total transmission power. Unless otherwise mentioned, we consider spreading sequences with spreading factor Nc = 32. To train CNN-AD, we generate 10^5 independent samples and use 80% for training and the rest for validation and testing. The Adam optimizer [21] with a learning rate of 10^−3 is used to minimize the cross-entropy loss function in (13).

Fig. 3: Achieved BER with MMSE with a priori AD of OMP, AMP, and CNN-AD, without knowing the number of active devices (uniform and non-uniform power allocation).
Fig. 4: Impact of Pa on the performance of different methods as the a priori AD for MMSE, in terms of achieved BER.

B. Simulation Results
1) Performance Evaluation of CNN-AD: We assess CNN-AD through various simulations and compare it with existing CS-based methods, including orthogonal matching pursuit (OMP) [22] and approximate message passing (AMP) [23].
Fig. 3 shows the impact of SNR on the activity error rate (AER) achieved by the different AD algorithms in both homogeneous and heterogeneous IoT networks, i.e., with uniform and non-uniform power allocation. The AER of the different methods is compared over a wide range of SNRs in an IoT system with K = 40 IoT devices and a single BS with M = 100 receive antennas. As expected, the AER of all AD algorithms decreases with increasing SNR. However, CNN-AD achieves the best performance: unlike the non-Bayesian greedy algorithms OMP and AMP, our method relies on the statistical distributions of device activities and channels and exploits them in the training process.

TABLE I: Performance analysis of different algorithms for two typical IoT devices for P_max = 0.1 at γ = 10 dB.

IoT Device | Model  | Precision | Recall | F1-score
Device A   | OMP    |    28%    |  32%   |   30%
           | AMP    |    31%    |  35%   |   33%
           | CNN-AD |    73%    |  92%   |   81%
Device B   | OMP    |    33%    |  32%   |   32%
           | AMP    |    38%    |  35%   |   36%
           | CNN-AD |   100%    |  83%   |   91%
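The AER reported in Fig. 3 can be computed as sketched below. The paper does not give an explicit formula, so this assumes the common definition: the fraction of devices whose active/inactive state is misdetected.

```python
import numpy as np

def activity_error_rate(true_activity, detected_activity):
    """Fraction of devices whose activity state is misdetected
    (assumed definition; the paper does not spell it out)."""
    true_activity = np.asarray(true_activity, dtype=bool)
    detected_activity = np.asarray(detected_activity, dtype=bool)
    return float(np.mean(true_activity != detected_activity))

# Illustrative example with K = 40 devices, 8 of them active:
# 2 missed activations and 2 false alarms -> AER = 4/40 = 0.1.
truth = np.zeros(40, dtype=bool)
truth[:8] = True
est = truth.copy()
est[[0, 1, 20, 21]] = ~est[[0, 1, 20, 21]]
print(activity_error_rate(truth, est))  # 0.1
```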
Fig. 4 illustrates the effect of the activity rate on the bit error rate (BER) for minimum mean square error (MMSE)-MUD with different AD algorithms at γ = 10 dB. As the activity rate increases, the number of active devices grows accordingly, making it more difficult to detect all active devices; this results in a higher BER. We use P_max = 0.1 to train CNN-AD, so MMSE-MUD with CNN-AD shows performance degradation for activity rates larger than P_max = 0.1. However, it still outperforms MMSE-MUD with the OMP and AMP AD algorithms.
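Table I reports per-device precision, recall, and F1-score. Assuming the standard definitions from true-positive (TP), false-positive (FP), and false-negative (FN) counts, a quick sanity check of a table row can be sketched as follows; the counts below are hypothetical, chosen only to reproduce the Device B / CNN-AD ratios.

```python
def prf1(tp, fp, fn):
    """Standard precision, recall, and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts matching Table I's Device B / CNN-AD row:
# no false alarms (precision 100%), 2 of 12 activations missed
# (recall ~83%), giving F1 ~91%.
p, r, f = prf1(tp=10, fp=0, fn=2)
print(round(p, 2), round(r, 2), round(f, 2))  # 1.0 0.83 0.91
```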
It should be mentioned that this performance improves when CNN-AD is trained for a larger value of P_max. We further investigate the AD algorithms in terms of other metrics for two typical IoT devices for P_max = 0.1 at γ = 10 dB, as presented in Table I. In this table we compare the precision, recall, and F1-score, defined in [24], achieved by CNN-AD with those of the OMP and AMP AD algorithms. As seen, all metrics are improved by using CNN-AD.

VI. CONCLUSIONS

In this paper, we considered the problem of AD in IoT networks with grant-free NOMA. Depending on the application, IoT devices can be inactive for long periods of time and active only when transmitting to the BS. Hence, identifying the active devices is required for accurate data detection.
Some studies propose CS-based methods for AD; however, those methods require a high level of message sparsity. To remove this requirement and exploit the statistical properties of the channels, we proposed a CNN-based method called CNN-AD to detect active IoT devices. Comparison with available methods shows the strength of our algorithm.

ACKNOWLEDGMENT

The study presented in this paper is supported by Alberta Innovates and the Natural Sciences and Engineering Research Council of Canada (NSERC).

REFERENCES

[1] G. Durisi, T. Koch, and P. Popovski, "Toward massive, ultrareliable, and low-latency wireless communication with short packets," Proceedings of the IEEE, vol. 104, no. 9, pp. 1711–1726, 2016.
[2] L. D. Xu, W. He, and S. Li, "Internet of things in industries: A survey," IEEE Transactions on Industrial Informatics, vol. 10, no. 4, pp. 2233–2243, 2014.
[3] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, "Internet of Things: A survey on enabling technologies, protocols, and applications," IEEE Communications Surveys & Tutorials, vol. 17, no. 4, pp. 2347–2376, 2015.
[4] C. Bockelmann, N. Pratas, H. Nikopour, K. Au, T. Svensson, C. Stefanovic, P. Popovski, and A. Dekorsy, "Massive machine-type communications in 5G: Physical and MAC-layer solutions," IEEE Communications Magazine, vol. 54, no. 9, pp. 59–65, 2016.
[5] W. Ejaz and M. Ibnkahla, "Multiband spectrum sensing and resource allocation for IoT in cognitive 5G networks," IEEE Internet of Things Journal, vol. 5, no. 1, pp. 150–163, 2018.
[6] Z. Ding, P. Fan, and H. V. Poor, "Impact of user pairing on 5G nonorthogonal multiple-access downlink transmissions," IEEE Transactions on Vehicular Technology, vol. 65, no. 8, pp. 6010–6023, 2016.
[7] Y. Saito, Y. Kishiyama, A. Benjebbour, T. Nakamura, A. Li, and K. Higuchi, "Non-orthogonal multiple access (NOMA) for cellular future radio access," in 2013 IEEE 77th Vehicular Technology Conference (VTC Spring), 2013, pp. 1–5.
[8] K. Au, L. Zhang, H. Nikopour, E. Yi, A. Bayesteh, U. Vilaipornsawai, J. Ma, and P. Zhu, "Uplink contention based SCMA for 5G radio access," in 2014 IEEE Globecom Workshops (GC Wkshps), 2014, pp. 900–905.
[9] L. Liu, E. G. Larsson, W. Yu, P. Popovski, C. Stefanovic, and E. de Carvalho, "Sparse signal processing for grant-free massive connectivity: A future paradigm for random access protocols in the Internet of Things," IEEE Signal Processing Magazine, vol. 35, no. 5, pp. 88–99, Sep. 2018.
[10] S. Verdu et al., Multiuser Detection. Cambridge University Press, 1998.
[11] Y. Zhang, Q. Guo, Z. Wang, J. Xi, and N. Wu, "Block sparse Bayesian learning based joint user activity detection and channel estimation for grant-free NOMA systems," IEEE Transactions on Vehicular Technology, vol. 67, no. 10, pp. 9631–9640, 2018.
[12] H. Zhu and G. B. Giannakis, "Exploiting sparse user activity in multiuser detection," IEEE Transactions on Communications, vol. 59, no. 2, pp. 454–465, Feb. 2011.
[13] H. F. Schepker, C. Bockelmann, and A. Dekorsy, "Coping with CDMA asynchronicity in compressive sensing multi-user detection," in 2013 IEEE 77th Vehicular Technology Conference (VTC Spring), Jun. 2013, pp. 1–5.
[14] Z. Chen, F. Sohrabi, and W. Yu, "Sparse activity detection for massive connectivity," IEEE Transactions on Signal Processing, vol. 66, no. 7, pp. 1890–1904, Apr. 2018.
[15] K. Takeuchi, T. Tanaka, and T. Kawabata, "Performance improvement of iterative multiuser detection for large sparsely spread CDMA systems by spatial coupling," IEEE Transactions on Information Theory, vol. 61, no. 4, pp. 1768–1794, Apr. 2015.
[16] Y. Wang, X. Zhu, E. G. Lim, Z. Wei, Y. Liu, and Y. Jiang, "Compressive sensing based user activity detection and channel estimation in uplink NOMA systems," in 2020 IEEE Wireless Communications and Networking Conference (WCNC), 2020, pp. 1–6.
[17] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016, vol. 1.
[18] M. Mohammadkarimi, M. Mehrabi, M. Ardakani, and Y. Jing, "Deep learning based sphere decoding," IEEE Trans. Wireless Commun., pp. 1–1, 2019.
[19] X. Miao, D. Guo, and X. Li, "Grant-free NOMA with device activity learning using long short-term memory," IEEE Wireless Communications Letters, pp. 1–1, 2020.
[20] W. Zhang, R. K. Mallik, and K. B. Letaief, "Cooperative spectrum sensing optimization in cognitive radio networks," in 2008 IEEE International Conference on Communications, 2008, pp. 3411–3415.
[21] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[22] T. T. Cai and L. Wang, "Orthogonal matching pursuit for sparse signal recovery with noise," IEEE Transactions on Information Theory, vol. 57, no. 7, pp. 4680–4688, 2011.
[23] D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, vol. 106, no. 45, pp. 18914–18919, 2009.
[24] C. Goutte and E. Gaussier, "A probabilistic interpretation of precision, recall and F-score, with implication for evaluation," in European Conference on Information Retrieval. Springer, 2005, pp. 345–359.
diff --git a/JNFIT4oBgHgl3EQfZCuh/content/2301.11251v1.pdf b/JNFIT4oBgHgl3EQfZCuh/content/2301.11251v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..caab41245b0f772d18db40628f187830fb7a95be
--- /dev/null
+++ b/JNFIT4oBgHgl3EQfZCuh/content/2301.11251v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:615d060741c38e062c0ac71fb4ed52ad7f261a247a90b42f7044d142d49de416
+size 5642271
diff --git a/JdE2T4oBgHgl3EQfUgew/vector_store/index.faiss b/JdE2T4oBgHgl3EQfUgew/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..85cd831f68665cf9932afea9963e54755a5c0a62
--- /dev/null
+++ b/JdE2T4oBgHgl3EQfUgew/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15cc58fa3787a33564790da1b006ea35075e5166608ec5539f64846362378e9e
+size 655405
diff --git a/KdFOT4oBgHgl3EQfzDRI/content/2301.12930v1.pdf b/KdFOT4oBgHgl3EQfzDRI/content/2301.12930v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..69fa8ad176bfb531d82e6135bf65916ee297061f
--- /dev/null
+++ b/KdFOT4oBgHgl3EQfzDRI/content/2301.12930v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d7423488e44e8f2586f6c1205cc378cd6d115fc280a8580b628733d4d97d429
+size 1666446
diff --git a/MNE4T4oBgHgl3EQfiw29/content/tmp_files/2301.05137v1.pdf.txt b/MNE4T4oBgHgl3EQfiw29/content/tmp_files/2301.05137v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..b0a90aeb89ac63cdc641cdccc28c95d604adee23
--- /dev/null
+++ b/MNE4T4oBgHgl3EQfiw29/content/tmp_files/2301.05137v1.pdf.txt
@@ -0,0 +1,2136 @@
Springer Nature 2021 LATEX template

Density functions of periodic sequences of continuous events

Olga Anosova1 and Vitaliy Kurlin1*

1*Computer Science, University of Liverpool, Ashton Street, Liverpool, L69 3BX, UK.
*Corresponding author(s). E-mail(s): vitaliy.kurlin@gmail.com;
Contributing authors: oanosova@liverpool.ac.uk;

Abstract

Periodic Geometry studies isometry invariants of periodic point sets that are also continuous under perturbations. The motivations come from periodic crystals, whose structures are determined in a rigid form, but whose minimal cells can discontinuously change due to small noise in measurements. For any integer k ≥ 0, the density function of a periodic set S was previously defined as the fractional volume of all k-fold intersections (within a minimal cell) of balls that have a variable radius t and centers at all points of S. This paper introduces density functions for periodic sets of points with different initial radii, motivated by atomic radii of chemical elements and by continuous events occupying disjoint intervals in time series. The contributions are explicit descriptions of the densities for periodic sequences of intervals. The new densities are strictly stronger and distinguish periodic sequences that have identical densities in the case of zero radii.

Keywords: computational geometry, periodic set, periodic time series, isometry invariant, density function

MSC Classification: 68U05, 51K05, 51N20, 51F30, 51F20

1 Motivations for the density functions of periodic sets

This work substantially extends the previous conference paper [3] in Discrete Geometry and Mathematical Morphology 2022. The past work explicitly described the density functions for periodic sequences of zero-sized points. The new work extends these analytic descriptions to periodic sequences whose points have non-negative radii. The proposed extension to the weighted case is motivated by crystallography and materials chemistry [1] because all chemical elements have different atomic radii.

In dimension 1, the key motivation is the study of periodic time series consisting of continuous and sequential (non-overlapping) events represented by disjoint intervals. Any such interval [a, b] ⊂ R for a ≤ b is the one-dimensional ball with center (a + b)/2 and radius (b − a)/2.

The point-set representation of periodic crystals is the most fundamental mathematical model for crystalline materials because nuclei of atoms are well-defined physical objects, while chemical bonds are not real sticks or strings but abstractly represent inter-atomic interactions depending on many thresholds for distances and angles.

Since crystal structures are determined in a rigid form, their most practical equivalence is rigid motion (a composition of translations and rotations) or isometry, which maintains all inter-point distances and also includes mirror reflections [20]. Now we introduce the key concepts. Let R^n be Euclidean space and Z the set of all integers.

arXiv:2301.05137v1 [cs.CG] 12 Jan 2023

Fig. 1 Illustration of Definition 1.2 for the hexagonal lattice. Left: subregions U_k(t) are covered by k disks for the radii t = 0.25, 0.55, 0.75, 1. Right: the densities ψ_k are shown above the corresponding densigram of the accumulated functions Σ_{i=1}^{k} ψ_i(t).

Definition 1.1 (a lattice Λ, a unit cell U, a motif M, a periodic point set S = M + Λ). For any linear basis v_1, ..., v_n of R^n, a lattice is Λ = {Σ_{i=1}^{n} c_i v_i : c_i ∈ Z}. The unit cell U(v_1, ..., v_n) = {Σ_{i=1}^{n} c_i v_i : c_i ∈ [0, 1)} is the parallelepiped spanned by the basis above. A motif M ⊂ U is any finite set of points p_1, ..., p_m ∈ U. A periodic point set [20] is the Minkowski sum S = M + Λ = {u + v | u ∈ M, v ∈ Λ}. ■

In dimension n = 1, a lattice is defined by any non-zero vector v ∈ R, and any periodic point set S is a periodic sequence {p_1, ..., p_m} + vZ with the period v equal to the length of the vector v.

Definition 1.2 (density functions for periodic sets of points with radii). Let a periodic set S = Λ + M ⊂ R^n have a unit cell U. For every point p ∈ M, fix a radius r(p) ≥ 0. For any integer k ≥ 0, let U_k(t) be the region within the cell U covered by exactly k closed balls B̄(p; r(p) + t) for t ≥ 0 and all points p ∈ M and their translations by Λ. The k-th density function ψ_k[S](t) = Vol[U_k(t)] / Vol[U] is the fractional volume of the k-fold intersections of these balls within U. ■

The density ψ_k[S](t) can be interpreted as the probability that a random (uniformly chosen in U) point q is at a maximum distance t from exactly k balls with initial radii r(p) and centers at points p ∈ S. For k = 0, the 0-th density ψ_0[S](t) measures the fractional volume of the empty space not covered by any expanding balls B̄(p; r(p) + t).

In the simplest case of radii r(p) = 0, the infinite sequence Ψ[S] = {ψ_k(t)}_{k=0}^{+∞} was called the density fingerprint of a periodic point set S in [8, section 3]. For k = 1 and small t > 0, while all equal-sized balls B̄(p; t) remain disjoint, the 1st density ψ_1[S](t) increases proportionally to t^n but later reaches a maximum and eventually drops back to 0 when all points of R^n are covered by at least two balls. See the densities ψ_k, k = 0, ..., 8, for the square and hexagonal lattices in [8, Fig. 2].
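For a quick intuition in dimension 1, Definition 1.2 can be approximated numerically. The sketch below is our own illustration (not an algorithm from this paper): it estimates ψ_k(t) for a 1-periodic sequence by uniform grid sampling of the unit cell, using the sequence with points 0, 1/3, 1/2 and radii 1/12, 0, 1/12 that is studied in the later examples.

```python
def density_estimate(points, radii, t, k, samples=100000):
    """Estimate psi_k[S](t) for S = points + Z (period 1) by counting
    grid points of the unit cell covered by exactly k grown intervals."""
    hits = 0
    for j in range(samples):
        q = (j + 0.5) / samples  # uniform grid point in (0, 1)
        covered = 0
        for p, r in zip(points, radii):
            d = abs(q - p) % 1.0  # periodic distance from q to p
            d = min(d, 1.0 - d)
            if d <= r + t:
                covered += 1
        if covered == k:
            hits += 1
    return hits / samples

# psi_0 at t = 0 for points 0, 1/3, 1/2 with radii 1/12, 0, 1/12
print(density_estimate([0, 1/3, 1/2], [1/12, 0, 1/12], t=0.0, k=0))  # close to 2/3
```

Such sampling only approximates the piecewise linear densities; the following sections derive them exactly.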
The original densities helped find a missing crystal in the Cambridge Structural Database, which was accidentally confused with a slight perturbation (measured at a different temperature) of another crystal (polymorph) with the same chemical composition, see [8, section 7].

The new weighted case with radii r(p) ≥ 0 in Definition 1.2 is even more practically important due to the different Van der Waals radii, which are individually defined for all chemical elements.

The key advantage of density functions over other isometry invariants of periodic crystals (such as symmetries or conventional representations based on a geometry of a minimal cell) is their continuity under perturbations; see details in section 2 reviewing the related past work.

The only limitation is the infinite size of the densities ψ_k(t) due to the unbounded parameters: the integer index k ≥ 0 and the continuous radius t ≥ 0. We state the following problem in full generality to motivate future work on these densities.

Problem 1.3 (computation of ψ_k). Verify if the density functions ψ_k[S](t) from Definition 1.2 can be computed in a polynomial time (in the size m of a motif of S) for a fixed dimension n. ■

The main contribution is the full solution of Problem 1.3 for n = 1. Theorems 3.2, 4.2, 5.2, 6.2, and Corollary 6.4 efficiently compute all ψ_k[S](t) depending on infinitely many values of k and t.

2 Review of related past work

Periodic Geometry was initiated in 2020 by the problem [14, section 2.3] of designing a computable metric on isometry classes of lattices that is continuous under perturbations of a lattice basis. Though a Voronoi domain is combinatorially unstable under perturbations, its geometric shape was used to introduce two continuous metrics [14, Theorems 2, 4], which require approximations due to a minimization over infinitely many rotations.

Similar minimizations over rotations or other continuous parameters are required for the complete invariant isosets [2, 4] and for density functions, which can be practically computed in low dimensions [16] and whose completeness was proved for generic periodic point sets in R^3 [8, Theorem 2]. The density fingerprint Ψ[S] turned out to be incomplete [8, section 5], see the example below.

Example 2.1 (periodic sequences S15, Q15 ⊂ R). Widdowson et al. [20, Appendix B] discussed homometric sets that can be distinguished by the invariant AMD (Average Minimum Distances) but not by diffraction patterns. The sequences
S15 = {0, 1, 3, 4, 5, 7, 9, 10, 12} + 15Z,
Q15 = {0, 1, 3, 4, 6, 8, 9, 12, 14} + 15Z
have the unit cell [0, 15], shown as a circle in Fig. 2.

Fig. 2 Circular versions of the periodic sets S15, Q15.

These periodic sequences [9] are obtained as the Minkowski sums S15 = U + V + 15Z and Q15 = U − V + 15Z for U = {0, 4, 9}, V = {0, 1, 3}. ■

For rational-valued periodic sequences, [9, Theorem 4] proved that r-th order invariants (combinations of r-factor products) up to r = 6 are enough to distinguish such sequences up to a shift (a rigid motion of R without reflections).

The AMD invariant was extended to the Pointwise Distance Distribution (PDD), whose generic completeness [19, Theorem 4.4] was proved in any dimension n ≥ 1. However, there are finite sets in R^3 [15, Fig. S4] with the same PDD, which were distinguished by more sophisticated distance-based invariants in [18, appendix C].

The subarea of Lattice Geometry developed continuous parameterizations for the moduli spaces of lattices considered up to isometry in dimension two [7, 13] and three [6, 10].

For 1-periodic sequences of points in R^n, complete isometry invariants with continuous and computable metrics appeared in [12]; see related results for finite clouds of unlabeled points [11, 17].
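The Minkowski sum and difference in Example 2.1 are easy to verify directly; the following snippet (our own check, not part of the paper) recomputes both motifs modulo the period 15.

```python
U = {0, 4, 9}
V = {0, 1, 3}

# Minkowski sum U + V and difference U - V of the motifs, modulo the period 15
S15 = sorted({(u + v) % 15 for u in U for v in V})
Q15 = sorted({(u - v) % 15 for u in U for v in V})

print(S15)  # [0, 1, 3, 4, 5, 7, 9, 10, 12]
print(Q15)  # [0, 1, 3, 4, 6, 8, 9, 12, 14]
```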
3 The 0-th density function ψ_0

This section proves Theorem 3.2, which explicitly describes the 0-th density function ψ_0[S](t) for any periodic sequence S ⊂ R of disjoint intervals.

For convenience, scale any periodic sequence S to period 1 so that S is given by points 0 ≤ p_1 < ... < p_m < 1 with radii r_1, ..., r_m, respectively. Since the expanding balls in R are growing intervals, the volumes of their intersections change linearly with respect to the variable radius t. Hence any density function ψ_k(t) is piecewise linear and uniquely determined by the corner points (a_j, b_j) where the gradient of ψ_k(t) changes.

To prepare the proof of Theorem 3.2, we first consider Example 3.1 for a simple sequence S.

Example 3.1 (0-th density function ψ_0). Let the periodic sequence S = {0, 1/3, 1/2} + Z have three points p_1 = 0, p_2 = 1/3, p_3 = 1/2 of radii r_1 = 1/12, r_2 = 0, r_3 = 1/12, respectively. Fig. 3 shows each point p_i and its growing interval L_i(t) = [(p_i − r_i) − t, (p_i + r_i) + t] of length 2r_i + 2t for i = 1, 2, 3 in its own color: red, green, blue.

By Definition 1.2, each density function ψ_k[S](t) measures the fractional length covered by exactly k intervals within the unit cell [0, 1]. We periodically map the endpoints of each growing interval to the unit cell [0, 1]. For instance, the interval [−1/12 − t, 1/12 + t] of the point p_1 = 0 ≡ 1 (mod 1) maps to the red intervals [0, 1/12 + t] ∪ [11/12 − t, 1] shown by solid red lines in Fig. 3. The same image shows the green interval [1/3 − t, 1/3 + t] by dashed lines and the blue interval [5/12 − t, 7/12 + t] by dotted lines.

At the moment t = 0, since the starting intervals are disjoint, they cover the length l = 2(1/12 + 0 + 1/12) = 1/3. The non-covered part of [0, 1] has length 1 − 1/3 = 2/3. So the graph of ψ_0(t) at t = 0 starts from the point (0, 2/3), see Fig. 4.

At the first critical moment t = 1/24, when the green and blue intervals collide at p = 3/8, only the intervals [1/8, 7/24] ∪ [5/8, 7/8] of total length 5/12 remain uncovered. Hence ψ_0(t) linearly drops to the point (1/24, 5/12). At the next critical moment t = 1/8, when the red and green intervals collide at p = 5/24, only the interval [17/24, 19/24] of length 1/12 remains uncovered, so ψ_0(t) continues to (1/8, 1/12).

The graph of ψ_0(t) finally returns to the t-axis at the point (1/6, 0) and remains there for t ≥ 1/6. The piecewise linear behavior of ψ_0(t) can be described by specifying the corner points in Fig. 4: (0, 2/3), (1/24, 5/12), (1/8, 1/12), (1/6, 0). ■

Theorem 3.2 extends Example 3.1 to any periodic sequence S and implies that the 0-th density function ψ_0(t) is uniquely determined by the ordered gap lengths between successive intervals.

Theorem 3.2 (description of ψ_0). Let a periodic sequence S = {p_1, ..., p_m} + Z consist of disjoint intervals with centers 0 ≤ p_1 < ... < p_m < 1 and radii r_1, ..., r_m ≥ 0. Consider the total length l = 2 Σ_{i=1}^{m} r_i and the gaps between successive intervals g_i = (p_i − r_i) − (p_{i−1} + r_{i−1}), where i = 1, ..., m and p_0 = p_m − 1, r_0 = r_m. Put the gaps in increasing order: g_[1] ≤ g_[2] ≤ ... ≤ g_[m].

Then the 0-th density ψ_0[S](t) is piecewise linear with the following (unordered) corner points: (0, 1 − l) and (g_[i]/2, 1 − l − Σ_{j=1}^{i−1} g_[j] − (m − i + 1) g_[i]) for i = 1, ..., m, so the last corner is (g_[m]/2, 0). If any corners are repeated, e.g. when g_[i−1] = g_[i], these corners are collapsed into one corner. ■

Proof. By Definition 1.2, the 0-th density function ψ_0(t) measures the total length of the subintervals of the unit cell [0, 1] that are not covered by any of the growing intervals L_i(t) = [p_i − r_i − t, p_i + r_i + t], i = 1, ..., m. For t = 0, since all initial intervals L_i(0) are disjoint, they cover the total length 2 Σ_{i=1}^{m} r_i = l.

Then the graph of ψ_0(t) at t = 0 starts from the point (0, 1 − l). So ψ_0(t) linearly decreases from the initial value ψ_0(0) = 1 − l except at the m critical values of t where one of the gap intervals [p_i + r_i + t, p_{i+1} − r_{i+1} − t] between the successive growing intervals L_i(t) and L_{i+1}(t) shrinks to a point. These critical radii t are ordered according to the gaps g_[1] ≤ g_[2] ≤ ... ≤ g_[m].

The first critical radius is t = g_[1]/2, when a shortest gap interval of length g_[1] is covered by the growing successive intervals. At this moment t = g_[1]/2, all m growing intervals L_i(t) have total length l + m g_[1]. Then the 0-th density ψ_0(t) has the first corner points (0, 1 − l) and (g_[1]/2, 1 − l − m g_[1]).

The second critical radius is t = g_[2]/2, when all intervals L_i(t) have total length l + g_[1] + (m − 1) g_[2], i.e. the next corner point is (g_[2]/2, 1 − l − g_[1] − (m − 1) g_[2]). If g_[1] = g_[2], then both corner points coincide, so ψ_0(t) will continue from the joint corner point.

The above pattern generalizes to the i-th critical radius t = g_[i]/2, when the covered intervals have total length l plus Σ_{j=1}^{i−1} g_[j] (for the fully covered gaps) plus (m − i + 1) g_[i] (for the still growing gaps). For the final critical radius t = g_[m]/2, the whole unit cell [0, 1] is covered by the grown intervals because Σ_{j=1}^{m} g_[j] = 1 − l. The final corner is (g_[m]/2, 0). □

Fig. 3 The sequence S = {0, 1/3, 1/2} + Z has points of radii 1/12, 0, 1/12, respectively. The growing intervals around the red point 0 ≡ 1 (mod 1), the green point 1/3 and the blue point 1/2 keep the same color for various radii t, see Examples 3.1, 4.1, 5.1.

Fig. 4 The 0-th density function ψ_0(t) for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, respectively, see Example 3.1.

Example 3.3 applies Theorem 3.2 to get the function ψ_0 found for the periodic sequence S in Example 3.1.

Example 3.3 (using Theorem 3.2). The sequence S = {0, 1/3, 1/2} + Z in Example 3.1 with points p_1 = 0, p_2 = 1/3, p_3 = 1/2 of radii r_1 = 1/12, r_2 = 0, r_3 = 1/12, respectively, has l = 2(r_1 + r_2 + r_3) = 1/3 and the initial gaps between successive intervals
g_1 = (p_1 + 1 − r_1) − (p_3 + r_3) = (1 − 1/12) − (1/2 + 1/12) = 1/3,
g_2 = (p_2 − r_2) − (p_1 + r_1) = (1/3 − 0) − (0 + 1/12) = 1/4,
g_3 = (p_3 − r_3) − (p_2 + r_2) = (1/2 − 1/12) − (1/3 + 0) = 1/12.
Order the gaps: g_[1] = 1/12 < g_[2] = 1/4 < g_[3] = 1/3. Then
1 − l = 1 − 1/3 = 2/3,
1 − l − 3g_[1] = 2/3 − 3/12 = 5/12,
1 − l − g_[1] − 2g_[2] = 2/3 − 1/12 − 2/4 = 1/12,
1 − l − g_[1] − g_[2] − g_[3] = 2/3 − 1/12 − 1/4 − 1/3 = 0.
By Theorem 3.2, ψ_0(t) has the corner points
(0, 1 − l) = (0, 2/3),
(g_[1]/2, 1 − l − 3g_[1]) = (1/24, 5/12),
(g_[2]/2, 1 − l − g_[1] − 2g_[2]) = (1/8, 1/12),
(g_[3]/2, 1 − l − g_[1] − g_[2] − g_[3]) = (1/6, 0).
See the graph of the 0-th density ψ_0(t) in Fig. 4. ■

By Theorem 3.2, any 0-th density function ψ_0(t) is uniquely determined by the (unordered) set of gap lengths between successive intervals. Hence we can re-order these intervals without changing ψ_0(t). For instance, the periodic sequence Q = {0, 1/2, 2/3} + Z with points 0, 1/2, 2/3 of radii 1/12, 1/12, 0 has the same set of ordered gaps g_[1] = 1/12 < g_[2] = 1/4 < g_[3] = 1/3 as the periodic sequence S = {0, 1/3, 1/2} + Z in Example 3.1.

The above sequences S, Q are related by the mirror reflection t ↦ 1 − t. One can easily construct many non-isometric sequences with ψ_0[S](t) = ψ_0[Q](t).
For any 1 ≤ i ≤ m − 3, the sequences S_{m,i} = {0, 2, 3, ..., i + 2, i + 4, i + 5, ..., m + 2} + (m + 2)Z have the same interval lengths d_[1] = ... = d_[m−2] = 1, d_[m−1] = d_[m] = 2 but are not related by isometry (translations and reflections in R) because the intervals of length 2 are separated by i − 1 intervals of length 1 in S_{m,i}.

4 The 1st density function ψ_1

This section proves Theorem 4.2, which explicitly describes the 1st density ψ_1[S](t) for any periodic sequence S of disjoint intervals. To prepare the proof of Theorem 4.2, Example 4.1 finds ψ_1[S] for the sequence S from Example 3.1.

Example 4.1 (ψ_1 for S = {0, 1/3, 1/2} + Z). The 1st density function ψ_1(t) can be obtained as the sum of three trapezoid functions η_R, η_G, η_B, each measuring the length of the region covered by a single interval of one color, see Fig. 3.

At the initial moment t = 0, the red intervals [0, 1/12] ∪ [11/12, 1] have total length η_R(0) = 1/6. These red intervals [0, 1/12 + t] ∪ [11/12 − t, 1] for t ∈ [0, 1/8] grow until they touch the green interval [7/24, 3/8], when they have total length η_R(1/8) = 1/6 + 2/8 = 5/12, see the second picture of Fig. 3. So the graph of the red length η_R(t) grows linearly with gradient 2 from the point (0, 1/6) to the corner point (1/8, 5/12).

For t ∈ [1/8, 1/6], the left red interval is shrinking at the same rate (due to the overlapping green interval) as the right red interval continues to grow, until t = 1/6, when the latter touches the blue interval [1/4, 3/4]. Hence the graph of η_R(t) remains constant for t ∈ [1/8, 1/6] up to the corner point (1/6, 5/12).

After that, the graph of η_R(t) linearly decreases (with gradient −2) until all red intervals are fully covered by the green and blue intervals at the moment t = 3/8, see the 6th picture in Fig. 3. Hence the trapezoid function η_R has the piecewise linear graph through the corner points (0, 1/6), (1/8, 5/12), (1/6, 5/12), (3/8, 0). After that, η_R(t) = 0 remains constant for t ≥ 3/8. Fig. 5 shows the graphs of η_R, η_G, η_B and ψ_1 = η_R + η_G + η_B. ■

Fig. 5 The trapezoid functions η_R, η_G, η_B and the 1st density function ψ_1(t) for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, see Example 4.1.

Theorem 4.2 extends Example 4.1 and proves that any ψ_1(t) is a sum of trapezoid functions whose corners are explicitly described. We consider any index i = 1, ..., m (of a point p_i or a gap g_i) modulo m, so that m + 1 ≡ 1 (mod m).

Theorem 4.2 (description of ψ_1). Let a periodic sequence S = {p_1, ..., p_m} + Z consist of disjoint intervals with centers 0 ≤ p_1 < ... < p_m < 1 and radii r_1, ..., r_m ≥ 0, respectively. Consider the gaps g_i = (p_i − r_i) − (p_{i−1} + r_{i−1}), where i = 1, ..., m and p_0 = p_m − 1, r_0 = r_m.

Then the 1st density ψ_1(t) is the sum of m trapezoid functions η_i, i = 1, ..., m, with the corners (0, 2r_i), (g_i/2, g + 2r_i), (g_{i+1}/2, g + 2r_i), ((g_i + g_{i+1})/2 + r_i, 0), where g = min{g_i, g_{i+1}}. Hence ψ_1(t) is determined by the unordered set of unordered pairs (g_i, g_{i+1}), i = 1, ..., m. ■

Proof. The 1st density ψ_1(t) equals the total length of the subregions covered by exactly one of the intervals L_i(t) = [p_i − r_i − t, p_i + r_i + t], i = 1, ..., m, where all intervals are taken modulo 1 within [0, 1]. Hence ψ_1(t) is the sum of the functions η_i, each measuring the length of the subinterval of L_i(t) not covered by the other intervals L_j(t), j ∈ {1, ..., m} − {i}.

Since the initial intervals L_i(0) are disjoint, each function η_i(t) starts from the value η_i(0) = 2r_i and grows linearly (with gradient 2) up to η_i(g/2) = 2r_i + g, where g = min{g_i, g_{i+1}}, when the growing interval L_i(t) of length 2r_i + 2t = 2r_i + g touches its closest neighboring interval L_{i±1}(t) across a shortest gap g.

If (say) g_i < g_{i+1}, then the subinterval covered only by L_i(t) is shrinking on the left and growing at the same rate on the right until L_i(t) touches the growing interval L_{i+1}(t) on the right. During this growth, when t is between g_i/2 and g_{i+1}/2, the trapezoid function η_i(t) = g remains constant. If g_i = g_{i+1}, this horizontal segment collapses to one point in the graph of η_i(t).

For t ≥ max{g_i, g_{i+1}}/2, the subinterval covered only by L_i(t) is shrinking on both sides until the neighboring intervals L_{i±1}(t) meet at the mid-point between their initial closest endpoints p_{i−1} + r_{i−1} and p_{i+1} − r_{i+1}. This meeting time is t = (p_{i+1} − r_{i+1} − p_{i−1} − r_{i−1})/2 = (g_i + 2r_i + g_{i+1})/2, which is also illustrated by Fig. 6. So the trapezoid function η_i has the corners (0, 2r_i), (g_i/2, 2r_i + g), (g_{i+1}/2, 2r_i + g), ((g_i + g_{i+1})/2 + r_i, 0) as expected. □

Fig. 6 The distances g, s, g′ between line intervals used in the proofs of Theorems 4.2 and 5.2, shown here for k = 3.

Example 4.3 applies Theorem 4.2 to get the function ψ_1 found for the periodic sequence S in Example 4.1.

Example 4.3 (using Theorem 4.2 for ψ_1). The sequence S = {0, 1/3, 1/2} + Z in Example 4.1 with points p_1 = 0, p_2 = 1/3, p_3 = 1/2 of radii r_1 = 1/12, r_2 = 0, r_3 = 1/12, respectively, has the initial gaps between successive intervals g_1 = 1/3, g_2 = 1/4, g_3 = 1/12, see all the computations in Example 3.3.

Case (R). In Theorem 4.2, for the trapezoid function η_R = η_1 measuring the fractional length covered only by the red interval, we set i = 1. Then r_i = 1/12, g_i = 1/3 and g_{i+1} = 1/4, so
(g_i + g_{i+1})/2 + r_i = (1/3 + 1/4)/2 + 1/12 = 3/8,
g = min{g_i, g_{i+1}} = 1/4, g + 2r_i = 1/4 + 2/12 = 5/12.

Then η_R = η_1 has the following corner points:
(0, 2r_i) = (0, 1/6),
(g_i/2, g + 2r_i) = (1/6, 5/12),
(g_{i+1}/2, g + 2r_i) = (1/8, 5/12),
((g_i + g_{i+1})/2 + r_i, 0) = (3/8, 0),
where the two middle corners are accidentally swapped due to g_i > g_{i+1}, but they define the same trapezoid function as in the first picture of Fig. 5.

Case (G). In Theorem 4.2, for the trapezoid function η_G = η_2 measuring the fractional length covered only by the green interval, we set i = 2. Then r_i = 0, g_i = 1/4 and g_{i+1} = 1/12, so
(g_i + g_{i+1})/2 + r_i = (1/4 + 1/12)/2 + 0 = 1/6,
g = min{g_i, g_{i+1}} = 1/12, g + 2r_i = 1/12 + 0 = 1/12.

Then η_G = η_2 has the following corner points, exactly as shown in the second picture of Fig. 5:
(0, 2r_i) = (0, 0),
(g_i/2, g + 2r_i) = (1/8, 1/12),
(g_{i+1}/2, g + 2r_i) = (1/24, 1/12),
((g_i + g_{i+1})/2 + r_i, 0) = (1/6, 0).

Case (B). In Theorem 4.2, for the trapezoid function η_B = η_3 measuring the fractional length covered only by the blue interval, we set i = 3. Then r_i = 1/12, g_i = 1/12 and g_{i+1} = 1/3, so
(g_i + g_{i+1})/2 + r_i = (1/12 + 1/3)/2 + 1/12 = 7/24,
g = min{g_i, g_{i+1}} = 1/12, g + 2r_i = 1/12 + 2/12 = 1/4.

Then η_B = η_3 has the following corner points:
(0, 2r_i) = (0, 1/6),
(g_i/2, g + 2r_i) = (1/24, 1/4),
(g_{i+1}/2, g + 2r_i) = (1/6, 1/4),
((g_i + g_{i+1})/2 + r_i, 0) = (7/24, 0),
exactly as shown in the third picture of Fig. 5. ■

5 Higher density functions ψ_k

This section proves Theorem 5.2 describing the k-th density function ψ_k[S](t) for any k ≥ 2 and a periodic sequence S of disjoint intervals.
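Before treating k ≥ 2, note that Theorem 4.2 is just as easy to implement. The sketch below (our own helper, exact rational arithmetic) generates the corner points of all trapezoid functions η_i and reproduces Cases (R) and (G) of Example 4.3.

```python
from fractions import Fraction as Fr

def psi1_trapezoids(points, radii):
    """Corner points of the trapezoid functions eta_i from Theorem 4.2;
    their sum over i equals the 1st density psi_1 of a 1-periodic sequence."""
    m = len(points)
    gaps = [(points[i] - radii[i])
            - (points[i - 1] + radii[i - 1] - (1 if i == 0 else 0))
            for i in range(m)]
    etas = []
    for i in range(m):
        gi, gi1, ri = gaps[i], gaps[(i + 1) % m], radii[i]
        g = min(gi, gi1)  # the nearer neighboring interval is met first
        etas.append([(Fr(0), 2 * ri),
                     (gi / 2, g + 2 * ri),
                     (gi1 / 2, g + 2 * ri),
                     ((gi + gi1) / 2 + ri, Fr(0))])
    return etas

# Example 4.3: points 0, 1/3, 1/2 with radii 1/12, 0, 1/12
etas = psi1_trapezoids([Fr(0), Fr(1, 3), Fr(1, 2)],
                       [Fr(1, 12), Fr(0), Fr(1, 12)])
```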
+To prepare the proof of Theorem 5.2, Exam- +ple 5.1 computes ψ2[S] for S from Example 3.1. +Example 5.1 (ψ2 for S = +� +0, 1 +3, 1 +2 +� ++ Z). The +density ψ2(t) can be found as the sum of the trape- +zoid functions ηGB, ηBR, ηRG, each measuring the +length of a double intersection, see Fig. 3. +For the green interval [1 +3 −t, 1 +3 +t] and the blue +interval [ 5 +12 − t, 7 +12 + t], the graph of the function +ηGB(t) is piecewise linear and starts at the point +( 1 +24, 0) because these intervals touch at t = 1 +24. +The green-blue intersection [ 5 +12 −t, 1 +3 +t] grows +until t = 1 +6, when the resulting interval [1 +4, 1 +2] +touches the red interval on the left. At the same +time, the graph of ηGB(t) is linearly growing (with +gradient 2) to the corner (1 +6, 1 +4), see Fig, 7. +For t ∈ [1 +6, 7 +24], the green-blue intersection +interval becomes shorter on the left, but grows at +the same rate on the right until t = 7 +24 when [1 +8, 5 +8] +touches the red interval [5 +8, 1] on the right, see +the 5th picture in Fig. 3. So the graph of ηGB(t) +remains constant up to the point ( 7 +24, 1 +4). +For t ∈ [ 7 +24, 5 +12] the green-blue intersection +interval is shortening from both sides. So the +graph of ηGB(t) linearly decreases (with gradient +−2) and returns to the t-axis at the corner ( 5 +12, 0), +then remains constant ηGB(t) = 0 for t ≥ 5 +12. +Fig. 7 shows all trapezoid functions for double +intersections and ψ2 = ηGB + ηBR + ηRG. +■ +Theorem 5.2 (description of ψk for k ≥ 2). Let +a periodic sequence S = {p1, . . . , pm} + Z consist +of disjoint intervals with centers 0 ≤ p1 < · · · < +pm < 1 and radii r1, . . . , rm ≥ 0, respectively. +Consider the gaps gi = (pi − ri) − (pi−1 + ri−1) +between the successive intervals of S, where i = +1, . . . , m and p0 = pm − 1, r0 = rm. +For k ≥ 2, the density function ψk(t) equals +the sum of m trapezoid functions ηk,i(t), i = +1, . . . 
, m, each having the following corner points:

(s/2, 0), ((g + s)/2, g), ((s + g′)/2, g), ((g + s + g′)/2, 0),

where g, g′ are the minimum and maximum values in the pair {gi + 2ri, gi+k + 2ri+k−1}, and s = Σ_{j=i+1}^{i+k−1} gj + 2 Σ_{j=i+1}^{i+k−2} rj, so s = gi+1 for k = 2.

Hence ψk(t) is determined by the unordered set of the ordered tuples (g, s, g′), i = 1, . . . , m. ■

Fig. 7 The trapezoid functions ηGB, ηBR, ηRG and the 2nd density function ψ2(t) for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, see Example 5.1.

Proof The k-th density function ψk(t) measures the total fractional length of k-fold intersections among the m intervals Li(t) = [pi − ri − t, pi + ri + t], i = 1, . . . , m. Now we visualize all such intervals Li(t) in the line R without mapping them modulo 1 to the unit cell [0, 1].

Since all radii satisfy ri ≥ 0, only k successive intervals can contribute to k-fold intersections. So a k-fold intersection of growing intervals emerges only when two intervals Li(t) and Li+k−1(t) overlap, because their intersection should also be covered by all the intermediate intervals Li(t), Li+1(t), . . . , Li+k−1(t).

Then the density ψk(t) equals the sum of the m trapezoid functions ηk,i, i = 1, . . . , m, each equal to the length of the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) not covered by other intervals. Then ηk,i(t) remains 0 until the first critical moment t when 2t equals the distance between the points pi + ri and pi+k−1 − ri+k−1 in R, see Fig. 6, so 2t = Σ_{j=i+1}^{i+k−1} gj + 2 Σ_{j=i+1}^{i+k−2} rj = s. Hence t = s/2 and (s/2, 0) is the first corner point of ηk,i(t).

At t = s/2, the interval of the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) starts expanding on both sides.
Hence ηk,i(t) starts increasing (with gradient 2) until the k-fold intersection touches one of the neighboring intervals Li−1(t) or Li+k(t) on the left or on the right.

The left interval Li−1(t) touches the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) when 2t equals the distance from pi−1 + ri−1 (the right endpoint of Li−1) to pi+k−1 − ri+k−1 (the left endpoint of Li+k−1), see Fig. 6, so 2t = Σ_{j=i}^{i+k−1} gj + 2 Σ_{j=i}^{i+k−2} rj = gi + 2ri + s.

The right interval Li+k(t′) touches the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t′) when 2t′ equals the distance from pi + ri (the right endpoint of Li) to pi+k − ri+k (the left endpoint of Li+k), see Fig. 6, so 2t′ = Σ_{j=i+1}^{i+k} gj + 2 Σ_{j=i+1}^{i+k−1} rj = s + gi+k + 2ri+k−1.

If (say) gi + 2ri = g < g′ = gi+k + 2ri+k−1, the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) first touches Li−1 at the earlier moment t before reaching Li+k(t′) at the later moment t′. At the earlier moment, ηk,i(t) equals 2(t − s/2) = gi + 2ri = g, so the graph has the corner ((g + s)/2, g).

After that, the k-fold intersection shrinks on the left and expands at the same rate on the right. So the function ηk,i(t) = g remains constant until the k-fold intersection touches the right interval Li+k(t′).

At this later moment t′ = (s + g′)/2, ηk,i(t′) still equals g, so the graph has the corner ((s + g′)/2, g).

If gi + 2ri = g′ > g = gi+k + 2ri+k−1, the growing intervals Li−1(t) and Li+k(t) touch the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) in the opposite order. However, the above arguments lead to the same corners ((g + s)/2, g) and ((s + g′)/2, g) of ηk,i(t). If g = g′, the two corners collapse to one corner in the graph of ηk,i(t).

The k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) becomes fully covered when the neighboring intervals Li−1(t) and Li+k(t) meet each other.
At this moment, 2t equals the distance from pi−1 + ri−1 (the right endpoint of Li−1) to pi+k − ri+k (the left endpoint of Li+k), see Fig. 6, so 2t = Σ_{j=i}^{i+k} gj + 2 Σ_{j=i}^{i+k−1} rj = gi + 2ri + s + gi+k + 2ri+k−1 = g + s + g′. The graph of ηk,i(t) has the final corner ((g + s + g′)/2, 0). □

Example 5.3 applies Theorem 5.2 to get the function ψ2 found for the periodic sequence S in Example 3.1.

Example 5.3 (using Theorem 5.2 for ψ2). The sequence S = {0, 1/3, 1/2} + Z in Example 4.1 with points p1 = 0, p2 = 1/3, p3 = 1/2 of radii r1 = 1/12, r2 = 0, r3 = 1/12, respectively, has the initial gaps g1 = 1/3, g2 = 1/4, g3 = 1/12, see Example 3.3.

In Theorem 5.2, the 2nd density function ψ2[S](t) is expressed as a sum of the trapezoid functions computed via their corners below.

Case (GB). For the function ηGB measuring the double intersections of the green and blue intervals centered at p2 = pi and p3 = pi+k−1, we set k = 2 and i = 2. Then we have the radii ri = 0 and ri+1 = 1/12, the gaps gi = 1/4, gi+1 = 1/12, gi+2 = 1/3, and the sum s = gi+1 = 1/12. The pair

{gi + 2ri, gi+2 + 2ri+1} = {1/4 + 0, 1/3 + 2/12}

has the minimum value g = 1/4 and maximum value g′ = 1/2. Then η2,2[S](t) = ηGB has the following corners, as expected in the top picture of Fig. 7:

(s/2, 0) = (1/24, 0),
((g + s)/2, g) = ((1/4 + 1/12)/2, 1/4) = (1/6, 1/4),
((s + g′)/2, g) = ((1/12 + 1/2)/2, 1/4) = (7/24, 1/4),
((g + s + g′)/2, 0) = ((1/4 + 1/12 + 1/2)/2, 0) = (5/12, 0).

Case (BR). For the trapezoid function ηBR measuring the double intersections of the blue and red intervals centered at p3 = pi and p1 = pi+k−1, we set k = 2 and i = 3. Then we have the radii ri = 1/12 = ri+1, the gaps gi = 1/12, gi+1 = 1/3, gi+2 = 1/4, and s = gi+1 = 1/3.
The pair

{gi + 2ri, gi+2 + 2ri+1} = {1/12 + 2/12, 1/4 + 2/12}

has the minimum g = 1/4 and maximum g′ = 5/12. Then η2,3[S](t) = ηBR has the following corners, as expected in the second picture of Fig. 7:

(s/2, 0) = (1/6, 0),
((g + s)/2, g) = ((1/4 + 1/3)/2, 1/4) = (7/24, 1/4),
((s + g′)/2, g) = ((1/3 + 5/12)/2, 1/4) = (3/8, 1/4),
((g + s + g′)/2, 0) = ((1/4 + 1/3 + 5/12)/2, 0) = (1/2, 0).

Case (RG). For the trapezoid function ηRG measuring the double intersections of the red and green intervals centered at p1 = pi and p2 = pi+k−1, we set k = 2 and i = 1. Then we have the radii ri = 1/12 and ri+1 = 0, the gaps gi = 1/3, gi+1 = 1/4, gi+2 = 1/12, and s = gi+1 = 1/4. The pair

{gi + 2ri, gi+2 + 2ri+1} = {1/3 + 2/12, 1/12 + 0}

has the minimum g = 1/12 and maximum g′ = 1/2. Then η2,1[S](t) = ηRG has the following corners, as expected in the third picture of Fig. 7:

(s/2, 0) = (1/8, 0),
((g + s)/2, g) = ((1/12 + 1/4)/2, 1/12) = (1/6, 1/12),
((s + g′)/2, g) = ((1/4 + 1/2)/2, 1/12) = (3/8, 1/12),
((g + s + g′)/2, 0) = ((1/12 + 1/4 + 1/2)/2, 0) = (5/12, 0). ■

6 Properties of new densities

This section proves the periodicity of the sequence ψk with respect to the index k ≥ 0 in Theorem 6.2, which was somewhat unexpected from the original Definition 1.2. We start with a simpler example for the familiar 3-point sequence in Fig. 3.

Example 6.1 (periodicity of ψk in the index k). Let the periodic sequence S = {0, 1/3, 1/2} + Z have three points p1 = 0, p2 = 1/3, p3 = 1/2 of radii r1 = 1/12, r2 = 0, r3 = 1/12, respectively.
The initial intervals L1(0) = [−1/12, 1/12], L2(0) = [1/3, 1/3], L3(0) = [5/12, 7/12] have the 0-fold intersection measured by ψ0(0) = 2/3 and the 1-fold intersection measured by ψ1(0) = 1/3, see Fig. 4 and 5.

By the time t = 1/2 the initial intervals will grow to L1(1/2) = [−7/12, 7/12], L2(1/2) = [−1/6, 5/6], L3(1/2) = [−1/12, 13/12]. The grown intervals at the radius t = 1/2 have the 3-fold intersection [−1/12, 7/12] of the length ψ3(1/2) = 2/3, which coincides with ψ0(0) = 2/3.

With the extra interval L4(1/2) = [5/12, 19/12] centered at p4 = 1, the 4-fold intersection is L1 ∩ L2 ∩ L3 ∩ L4 = [5/12, 7/12]. With the extra interval L5(1/2) = [5/6, 11/6] centered at p5 = 4/3, the 4-fold intersection L2 ∩ L3 ∩ L4 ∩ L5 is the single point 5/6. With the extra interval L6(1/2) = [11/12, 25/12] centered at p6 = 3/2, the 4-fold intersection is L3 ∩ L4 ∩ L5 ∩ L6 = [11/12, 13/12]. Hence the total length of the 4-fold intersection at t = 1/2 is ψ4(1/2) = 1/3, which coincides with ψ1(0) = 1/3.

For the larger t = 1, the six grown intervals

L1(1) = [−13/12, 13/12], L2(1) = [−2/3, 4/3], L3(1) = [−7/12, 19/12], L4(1) = [−1/12, 25/12], L5(1) = [1/3, 7/3], L6(1) = [5/12, 31/12]

have the 6-fold intersection [5/12, 13/12] of length ψ6(1) = 2/3, coinciding with ψ0(0) = ψ3(1/2) = 2/3. ■

Theorem 6.2 proves that the coincidences in Example 6.1 are not accidental. The periodicity of ψk with respect to k is illustrated by Fig. 8.

Theorem 6.2 (periodicity of ψk in the index k). The density functions ψk[S] of a periodic sequence S = {p1, . . . , pm} + Z consisting of disjoint intervals with centers 0 ≤ p1 < · · · < pm < 1 and radii r1, . . . , rm ≥ 0, respectively, satisfy the periodicity ψk+m(t + 1/2) = ψk(t) for any k ≥ 0 and t ≥ 0.
■

Proof Since the initial intervals are disjoint, for k ≥ 0, any (k + m)-fold intersection involves k + m successive intervals Li(t), . . . , Li+k+m−1(t) centered around the points of S. Then we can find an interval [x, x + 1] covering exactly m of these initial intervals of S.

By collapsing [x, x + 1] to the point x, any (k + m)-fold intersection of k + m intervals grown by a radius r ≥ 1/2 becomes a k-fold intersection of k intervals grown by t = r − 1/2. Both k-fold and (k + m)-fold intersections within any unit cell have the same fractional length, so ψk+m(t + 1/2) = ψk(t) for any t ≥ 0. □

Fig. 8 The densities ψk, k = 0, . . . , 9 for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, respectively. The densities ψ0, ψ1, ψ2 are described in Examples 3.1, 4.1, 5.1 and determine all other densities by the periodicity in Theorem 6.2.

The symmetry ψm−k(1/2 − t) = ψk(t) for k = 0, . . . , [m/2] and t ∈ [0, 1/2] from [3, Theorem 8] no longer holds for points with different radii. For example, ψ1(t) ≠ ψ2(1/2 − t) for the periodic sequence S = {0, 1/3, 1/2} + Z, see Fig. 5, 7. If all points have the same radius r, [3, Theorem 8] implies the symmetry after replacing t by t + 2r.

The main results of [3] implied that all density functions cannot distinguish the non-isometric sequences S15 = {0, 1, 3, 4, 5, 7, 9, 10, 12} + 15Z and Q15 = {0, 1, 3, 4, 6, 8, 9, 12, 14} + 15Z of points with zero radii. Example 6.3 shows that the densities for sequences with non-zero radii are strictly stronger and distinguish the sequences S15 ≇ Q15.

Example 6.3 (ψk for S15, Q15 with neighbor radii). For any point p in a periodic sequence S ⊂ R, define its neighbor radius as the half-distance to a closest neighbor of p within the sequence S.
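The neighbor radius just defined is straightforward to compute. Below is a small Python sketch (our own illustration, not the paper's R code [5]; the name `neighbor_radii` is ours) that assigns to every motif point the half-distance to its closest neighbor, with the wrap-around across the period taken into account; S15 from above serves as a check.

```python
def neighbor_radii(points, period):
    # half-distance from each motif point to its closest neighbour
    # in the periodic sequence sorted(points) + period * Z
    pts = sorted(points)
    m = len(pts)
    radii = []
    for i, p in enumerate(pts):
        left = p - (pts[i - 1] if i > 0 else pts[-1] - period)      # gap to the left neighbour
        right = (pts[i + 1] if i < m - 1 else pts[0] + period) - p  # gap to the right neighbour
        radii.append(min(left, right) / 2)
    return radii

# S15 with period 15; scaling everything by 1/15 gives the motif in [0, 1]
S15 = [0, 1, 3, 4, 5, 7, 9, 10, 12]
radii_S15 = neighbor_radii(S15, 15)
# e.g. the point 0 gets radius 1/2 (closest neighbour 1),
# while the point 12 gets radius 1 (closest neighbour 10)
```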
This choice of radii respects isometry in the sense that periodic sequences S, Q with zero-sized radii are isometric if and only if S, Q with neighbor radii are isometric. Fig. 9 shows that the densities ψk for k ≥ 2 distinguish the non-isometric sequences S15 and Q15 scaled down by the factor 15 to the unit cell [0, 1], see Example 2.1. ■

Fig. 9 The densities ψk, k = 0, . . . , 10, distinguish (already for k ≥ 2) the sequences (scaled down by period 15) S15 = {0, 1, 3, 4, 5, 7, 9, 10, 12} + 15Z (top) and Q15 = {0, 1, 3, 4, 6, 8, 9, 12, 14} + 15Z (bottom), where the radius ri of any point is the half-distance to its closest neighbor. These sequences with zero radii have identical ψk for all k, see [3, Example 10].

Corollary 6.4 (computation of ψk(t)). Let S, Q ⊂ R be periodic sequences with at most m motif points. For k ≥ 1, one can draw the graph of the k-th density function ψk[S] in time O(m²). One can check in time O(m³) whether Ψ[S] = Ψ[Q]. ■

Proof To draw the graph of ψk[S] or evaluate the k-th density function ψk[S](t) at any radius t, we first use the periodicity from Theorem 6.2 to reduce k to the range 0, 1, . . . , m. In time O(m log m) we put the points from a unit cell U (scaled to [0, 1] for convenience) in the increasing (cyclic) order p1, . . . , pm. In time O(m) we compute the gaps gi = (pi − ri) − (pi−1 + ri−1) between successive intervals.

For k = 0, we put the gaps in the increasing order g[1] ≤ · · · ≤ g[m] in time O(m log m). By Theorem 3.2, in time O(m²) we write down the O(m) corner points whose horizontal coordinates are the critical radii where ψ0(t) can change its gradient.

We evaluate ψ0 at every critical radius t by summing up the values of m trapezoid functions at t, which needs O(m²) time. It remains to plot the points at all O(m) critical radii t and connect the successive points by straight lines, so the total time is O(m²).

For any larger fixed index k = 1, . . . , m, in time O(m²) we write down all O(m) corner points from Theorems 4.2 and 5.2, which leads to the graph of ψk(t) similarly to the above argument for k = 0.

To decide whether the infinite sequences of density functions coincide, Ψ[S] = Ψ[Q], by Theorem 6.2 it suffices to check only whether O(m) density functions coincide: ψk[S](t) = ψk[Q](t) for k = 0, 1, . . . , [m/2].

To check whether two piecewise linear functions coincide, it remains to compare their values at all O(m) critical radii t from the corner points in Theorems 3.2, 4.2, 5.2. Since these values were found in time O(m²) above, the total time for k = 0, 1, . . . , [m/2] is O(m³). □

All previous examples show densities with a single local maximum. However, the new R code [5] helped us discover the opposite examples.

Fig. 10 For the periodic sequence S = {0, 1/8, 1/4, 3/4} + Z whose points all have radii 0, the 2nd density ψ2[S](t) has a local minimum at t = 1/4 between two local maxima.

Example 6.5 (densities with multiple maxima). Fig. 10 shows a simple 4-point sequence S whose 2nd density ψ2[S] has two local maxima. Fig. 11 and 12 show more complicated sequences whose density functions have more than two maxima. ■

Fig. 11 For the sequence S = {0, 1/81, 1/27, 1/9, 1/3} + Z whose points all have radii 0, ψ2[S], equal to the sum of the five shown trapezoid functions, has three maxima.

Fig. 12 For the sequence S = {0, 1/64, 1/16, 1/8, 1/4, 3/4} + Z whose points all have radii 0, ψ3[S] has 5 local maxima.

7 Conclusions and future work

In comparison with the past work [3], the key contributions of this paper are the following.

• Definition 1.2 extends density functions ψk to any periodic sets of points with radii ri ≥ 0.
• Theorems 3.2, 4.2, 5.2 explicitly describe all ψk for any periodic sequence S of points with radii.
• The descriptions of ψk allowed us to justify the periodicity of ψk in Theorem 6.2 and a quadratic algorithm computing any ψk in Corollary 6.4.
• The code [5] helped us distinguish S15 ≇ Q15 in Example 6.3 and find sequences whose densities have multiple local maxima in Example 6.5.

Here are the open problems for future work.

• Verify whether density functions ψk[S](t) for small values of k distinguish all non-isometric periodic point sets S ⊂ Rn, at least with radii 0.
• Characterize the periodic sequences S ⊂ R all of whose density functions ψk for k ≥ 1 have a unique local maximum, unlike Example 6.5.
• Similarly to Theorems 3.2, 4.2, 5.2, analytically describe the density function ψk[S] for periodic point sets S ⊂ Rn in higher dimensions n > 1.

This research was supported by the grants of the UK Engineering and Physical Sciences Research Council (EP/R018472/1, EP/X018474/1) and the Royal Academy of Engineering Industrial Fellowship (IF2122/186) of the last author. We thank all reviewers for their time and helpful advice.
References

[1] Anosova, O., Kurlin, V.: Introduction to periodic geometry and topology. arxiv:2103.02749 (2021)
[2] Anosova, O., Kurlin, V.: An isometry classification of periodic point sets. In: Lecture Notes in Computer Science (Proceedings of DGMM), vol. 12708, pp. 229–241 (2021)
[3] Anosova, O., Kurlin, V.: Density functions of periodic sequences. In: Lecture Notes in Computer Science (Proceedings of DGMM), vol. 13493, pp. 395–408 (2022)
[4] Anosova, O., Kurlin, V.: Recognition of near-duplicate periodic patterns in polynomial time. arxiv:2205.15298 (2022)
[5] Anosova, O.: R code for density functions of periodic sequences (2023), https://github.com/oanosova/DensityFunctions1D
[6] Bright, M., Cooper, A.I., Kurlin, V.: Welcome to a continuous world of 3-dimensional lattices. arxiv:2109.11538 (2021)
[7] Bright, M., Cooper, A.I., Kurlin, V.: Geographic-style maps for 2-dimensional lattices. Acta Crystallographica Section A 79(1), 1–13 (2023)
[8] Edelsbrunner, H., Heiss, T., Kurlin, V., Smith, P., Wintraecken, M.: The density fingerprint of a periodic point set. In: SoCG, vol. 189, pp. 32:1–32:16 (2021)
[9] Grünbaum, F., Moore, C.: The use of higher-order invariants in the determination of generalized Patterson cyclotomic sets. Acta Cryst. A 51, 310–323 (1995)
[10] Kurlin, V.: A complete isometry classification of 3D lattices. arxiv:2201.10543 (2022)
[11] Kurlin, V.: Computable complete invariants for finite clouds of unlabeled points. arxiv:2207.08502 (2022), http://kurlin.org/projects/complete-isometry-invariants.pdf
[12] Kurlin, V.: Exactly computable and continuous metrics on isometry classes of finite and 1-periodic sequences. arXiv:2205.04388 (2022), http://kurlin.org/projects/periodic-geometry-topology/metric1D.pdf
[13] Kurlin, V.: Mathematics of 2-dimensional lattices.
Foundations of Computational Math- +ematics (2022), http://kurlin.org/projects/ +lattice-geometry/lattices2Dmaths.pdf +[14] Mosca, M., Kurlin, V.: Voronoi-based sim- +ilarity distances between arbitrary crystal +lattices. Crystal Research and Technology +55(5), 1900197 (2020) +[15] Pozdnyakov, S., et al.: Incompleteness of +atomic structure representations. Phys. Rev. +Let. 125, 166001 (2020) +[16] Smith, P., Kurlin, V.: A practical algo- +rithm for degree-k Voronoi domains of three- +dimensional periodic point sets. In: Lecture +Notes in Computer Science (Proceedings of +ISVC). vol. 13599, pp. 377–391 (2022) +[17] Smith, P., Kurlin, V.: Families of point +sets +with +identical +1D +persistence,. +arxiv:2202.00577 +(2022), +http://kurlin. +org/projects/periodic-geometry-topology/ +trivial-persistence.pdf +[18] Widdowson, +D., +Kurlin, +V.: +Pointwise +distance +distributions +of +periodic +sets. +arXiv:2108.04798 (version 1) (2021) +[19] Widdowson, +D., +Kurlin, +V.: +Resolving +the data ambiguity for periodic crystals. +Advances in Neural Information Process- +ing +Systems +(arXiv:2108.04798, +v2) +35 +(2022), http://kurlin.org/projects/periodic+ +geometry/NeurIPS2022PDD.pdf +[20] Widdowson, D., et al.: Average minimum dis- +tances of periodic point sets. MATCH Comm. +Math. Comp. Chemistry 87, 529–559 (2022) + diff --git a/MNE4T4oBgHgl3EQfiw29/content/tmp_files/load_file.txt b/MNE4T4oBgHgl3EQfiw29/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..c193e6d0d288e15757dfaf471f90fa78bebb04c6 --- /dev/null +++ b/MNE4T4oBgHgl3EQfiw29/content/tmp_files/load_file.txt @@ -0,0 +1,760 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf,len=759 +page_content='Springer Nature 2021 LATEX template Density functions of periodic sequences of continuous events Olga Anosova1 and Vitaliy Kurlin1* 1*Computer Science, University of Liverpool, Ashton street, Liverpool, L69 3BX, UK.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' Corresponding author(s).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' E-mail(s): vitaliy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='kurlin@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='com;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' Contributing authors: oanosova@liverpool.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='uk;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' Abstract Periodic Geometry studies isometry invariants of periodic point sets that are also continuous under perturbations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' The motivations come from periodic crystals whose structures are determined in a rigid form but any minimal cells can discontinuously change due to small noise in measure- ments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' For any integer k ≥ 0, the density function of a periodic set S was previously defined as the fractional volume of all k-fold intersections (within a minimal cell) of balls that have a variable radius t and centers at all points of S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' This paper introduces the density functions for periodic sets of points with different initial radii motivated by atomic radii of chemical elements and by continuous events occupying disjoint intervals in time series.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' The contributions are explicit descriptions of the densities for periodic sequences of intervals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' The new densities are strictly stronger and distinguish periodic sequences that have identical densities in the case of zero radii.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' Keywords: computational geometry, periodic set, periodic time series, isometry invariant, density function MSC Classification: 68U05 , 51K05 , 51N20 , 51F30 , 51F20 1 Motivations for the density functions of periodic sets This work substantially extends the previous conference paper [3] in Discrete Geometry and Mathematical Morphology 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' The past work explicitly described the density functions for peri- odic sequences of zero-sized points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' The new work extends these analytic descriptions to periodic sequences whose points have non-negative radii.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' The proposed extension to the weighted case is motivated by crystallography and materials chem- istry [1] because all chemical elements have differ- ent atomic radii.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' In dimension 1, the key motiva- tion is the study of periodic time series consisting of continuous and sequential (non-overlapping) events represented by disjoint intervals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' Any such interval [a, b] ⊂ R for a ≤ b is the one-dimensional ball with the center a + b 2 and radius b − a 2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' The point-set representation of periodic crys- tals is the most fundamental mathematical model for crystalline materials because nuclei of atoms are well-defined physical objects, while chemical bonds are not real sticks or strings but abstractly represent inter-atomic interactions depending on many thresholds for distances and angles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' Since crystal structures are determined in a rigid form, their most practical equivalence is rigid motion (a composition of translations and rota- tions) or isometry that maintains all inter-point distances and includes also mirror reflections [20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' Now we introduce the key concepts.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' Let Rn be Euclidean space, Z be the set of all integers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='05137v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='CG] 12 Jan 2023 Springer Nature 2021 LATEX template 2 Density functions of periodic sequences 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='8 1 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='8 1 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='2 1.' 
Fig. 1 Illustration of Definition 1.2 for the hexagonal lattice. Left: subregions Uk(t) are covered by k disks for the radii t = 0.25, 0.55, 0.75, 1. Right: the densities ψk are above the corresponding densigram of the accumulated functions ∑_{i=1}^k ψi(t).

Definition 1.1 (a lattice Λ, a unit cell U, a motif M, a periodic point set S = M + Λ). For any linear basis v1, ..., vn of R^n, a lattice is Λ = {∑_{i=1}^n ci vi : ci ∈ Z}. The unit cell U(v1, ..., vn) = {∑_{i=1}^n ci vi : ci ∈ [0, 1)} is the parallelepiped spanned by the basis above. A motif M ⊂ U is any finite set of points p1, ..., pm ∈ U. A periodic point set [20] is the Minkowski sum S = M + Λ = {u + v | u ∈ M, v ∈ Λ}. ■

In dimension n = 1, a lattice is defined by any non-zero vector v ∈ R, and any periodic point set S is a periodic sequence {p1, ..., pm} + vZ with the period v equal to the length of the vector v.

Definition 1.2 (density functions for periodic sets of points with radii). Let a periodic set S = Λ + M ⊂ R^n have a unit cell U. For every point p ∈ M, fix a radius r(p) ≥ 0. For any integer k ≥ 0, let Uk(t) be the region within the cell U covered by exactly k closed balls B̄(p; r(p) + t) for t ≥ 0 and all points p ∈ M and their translations by Λ. The k-th density function ψk[S](t) = Vol[Uk(t)] / Vol[U] is the fractional volume of the k-fold intersections of these balls within U.
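Definition 1.2 can also be approximated numerically. The sketch below (Python, with the hypothetical helper name `density_functions_1d`, not from the paper) estimates ψk[S](t) for a 1-periodic sequence by counting, on a uniform grid over the unit cell [0, 1), how many grown intervals cover each sample point; it assumes r(p) + t is small enough that only the translates p − 1, p, p + 1 of each motif point matter.

```python
def density_functions_1d(points, radii, t, samples=100_000):
    """Estimate the densities psi_k[S](t) of Definition 1.2 for a
    1-periodic sequence S = points + Z by sampling the unit cell [0, 1).

    Each sample q is counted by how many grown intervals
    [p - r - t, p + r + t] cover it (over all points p and the
    translates p - 1, p, p + 1, which suffice for small radii);
    psi_k is the fraction of samples covered exactly k times.
    """
    counts = {}
    for i in range(samples):
        q = (i + 0.5) / samples  # midpoint grid over [0, 1)
        k = 0
        for p, r in zip(points, radii):
            for shift in (-1, 0, 1):
                if abs(q - (p + shift)) <= r + t:
                    k += 1
        counts[k] = counts.get(k, 0) + 1
    return {k: c / samples for k, c in sorted(counts.items())}

# The sequence of Example 3.1 below: points 0, 1/3, 1/2 with radii 1/12, 0, 1/12.
est = density_functions_1d([0, 1/3, 1/2], [1/12, 0, 1/12], t=0)
```

At t = 0 the estimate gives est[0] ≈ 2/3 and est[1] ≈ 1/3, in line with ψ0(0) = 2/3 for this sequence.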
■

The density ψk[S](t) can be interpreted as the probability that a random point q (uniformly chosen in U) is at a maximum distance t to exactly k balls with initial radii r(p) and centers p ∈ S. For k = 0, the 0-th density ψ0[S](t) measures the fractional volume of the empty space not covered by any expanding balls B̄(p; r(p) + t). In the simplest case of radii r(p) = 0, the infinite sequence Ψ[S] = {ψk(t)}_{k=0}^{+∞} was called in [8, section 3] the density fingerprint of a periodic point set S. For k = 1 and small t > 0, while all equal-sized balls B̄(p; t) remain disjoint, the 1st density ψ1[S](t) increases proportionally to t^n, but later reaches a maximum and eventually drops back to 0 when all points of R^n are covered by at least two balls. See the densities ψk, k = 0, ..., 8, for the square and hexagonal lattices in [8, Fig. 2].

The original densities helped find a missing crystal in the Cambridge Structural Database, which was accidentally confused with a slight perturbation (measured at a different temperature) of another crystal (polymorph) with the same chemical composition, see [8, section 7]. The new weighted case with radii r(p) ≥ 0 in Definition 1.2 is even more important in practice due to the different Van der Waals radii, which are individually defined for all chemical elements.

The key advantage of density functions over other isometry invariants of periodic crystals (such as symmetries or conventional representations based on a geometry of a minimal cell) is their continuity under perturbations, see details in section 2 reviewing the related past work. The only limitation is the infinite size of the densities ψk(t) due to the unbounded parameters: the integer index k ≥ 0 and the continuous radius t ≥ 0.
We state the following problem in full generality to motivate future work on these densities.

Problem 1.3 (computation of ψk). Verify if the density functions ψk[S](t) from Definition 1.2 can be computed in polynomial time (in the size m of a motif of S) for a fixed dimension n. ■

The main contribution is the full solution of Problem 1.3 for n = 1. Theorems 3.2, 4.2, 5.2, 6.2, and Corollary 6.4 efficiently compute all ψk[S](t) depending on infinitely many values of k and t.

2 Review of related past work

Periodic Geometry was initiated in 2020 by the problem [14, section 2.3] to design a computable metric on isometry classes of lattices that is continuous under perturbations of a lattice basis. Though a Voronoi domain is combinatorially unstable under perturbations, its geometric shape was used to introduce two continuous metrics [14, Theorems 2, 4], which require approximations due to a minimization over infinitely many rotations. Similar minimizations over rotations or other continuous parameters are required for the complete invariant isosets [2, 4] and for the density functions, which can be practically computed in low dimensions [16] and whose completeness was proved for generic periodic point sets in R^3 [8, Theorem 2]. The density fingerprint Ψ[S] turned out to be incomplete [8, section 5] in the example below.
Example 2.1 (periodic sequences S15, Q15 ⊂ R). Widdowson et al. [20, Appendix B] discussed homometric sets that can be distinguished by the invariant AMD (Average Minimum Distances) and not by diffraction patterns. The sequences

S15 = {0, 1, 3, 4, 5, 7, 9, 10, 12} + 15Z,
Q15 = {0, 1, 3, 4, 6, 8, 9, 12, 14} + 15Z

have the unit cell [0, 15] shown as a circle in Fig. 2. These periodic sequences [9] are obtained as the Minkowski sums S15 = U + V + 15Z and Q15 = U − V + 15Z for U = {0, 4, 9}, V = {0, 1, 3}. ■

Fig. 2 Circular versions of the periodic sets S15, Q15.

For rational-valued periodic sequences, [9, Theorem 4] proved that r-th order invariants (combinations of r-factor products) up to r = 6 are enough to distinguish such sequences up to a shift (a rigid motion of R without reflections). The AMD invariant was extended to the Pointwise Distance Distribution (PDD), whose generic completeness [19, Theorem 4.4] was proved in any dimension n ≥ 1. However, there are finite sets in R^3 [15, Fig. S4] with the same PDD, which were distinguished by more sophisticated distance-based invariants in [18, appendix C]. The subarea of Lattice Geometry developed continuous parameterizations for the moduli spaces of lattices considered up to isometry in dimension two [7, 13] and three [6, 10]. For 1-periodic sequences of points in R^n, complete isometry invariants with continuous and computable metrics appeared in [12]; see related results for finite clouds of unlabeled points [11, 17].
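The homometry in Example 2.1 is easy to verify computationally: diffraction-type invariants depend only on the multiset of pairwise differences modulo the period. The short check below (a Python sketch, not from the paper) confirms that S15 and Q15 share this multiset, while no cyclic shift maps one sequence onto the other.

```python
from collections import Counter

def difference_multiset(points, period):
    """Multiset of pairwise differences (a - b) mod period over ordered
    pairs a != b; diffraction-type invariants depend only on this data."""
    return Counter((a - b) % period for a in points for b in points if a != b)

S15 = {0, 1, 3, 4, 5, 7, 9, 10, 12}
Q15 = {0, 1, 3, 4, 6, 8, 9, 12, 14}

# Same difference multiset => homometric (indistinguishable by diffraction).
homometric = difference_multiset(S15, 15) == difference_multiset(Q15, 15)
# No cyclic shift maps S15 onto Q15, so the two sequences are distinct.
shift_related = any({(s + c) % 15 for s in S15} == Q15 for c in range(15))
print(homometric, shift_related)  # True False
```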
3 The 0-th density function ψ0

This section proves Theorem 3.2, which explicitly describes the 0-th density function ψ0[S](t) for any periodic sequence S ⊂ R of disjoint intervals. For convenience, scale any periodic sequence S to period 1 so that S is given by points 0 ≤ p1 < · · · < pm < 1 with radii r1, ..., rm, respectively. Since the expanding balls in R are growing intervals, the volumes of their intersections change linearly with respect to the variable radius t. Hence any density function ψk(t) is piecewise linear and uniquely determined by the corner points (aj, bj) where the gradient of ψk(t) changes.

To prepare the proof of Theorem 3.2, we first consider Example 3.1 for the simple sequence S.

Example 3.1 (0-th density function ψ0). Let the periodic sequence S = {0, 1/3, 1/2} + Z have three points p1 = 0, p2 = 1/3, p3 = 1/2 of radii r1 = 1/12, r2 = 0, r3 = 1/12, respectively. Fig. 3 shows each point pi and its growing interval Li(t) = [(pi − ri) − t, (pi + ri) + t] of the length 2ri + 2t for i = 1, 2, 3 in its own color: red, green, blue. By Definition 1.2, each density function ψk[S](t) measures the fractional length covered by exactly k intervals within the unit cell [0, 1]. We periodically map the endpoints of each growing interval to the unit cell [0, 1].
For instance, the interval [−1/12 − t, 1/12 + t] of the point p1 = 0 ≡ 1 (mod 1) maps to the red intervals [0, 1/12 + t] ∪ [11/12 − t, 1] shown by solid red lines in Fig. 3. The same image shows the green interval [1/3 − t, 1/3 + t] by dashed lines and the blue interval [5/12 − t, 7/12 + t] by dotted lines.

At the moment t = 0, since the starting intervals are disjoint, they cover the length l = 2(1/12 + 0 + 1/12) = 1/3. The non-covered part of [0, 1] has length 1 − 1/3 = 2/3, so the graph of ψ0(t) at t = 0 starts from the point (0, 2/3), see Fig. 4. At the first critical moment t = 1/24, when the green and blue intervals collide at p = 3/8, only the intervals [1/8, 7/24] ∪ [5/8, 7/8] of total length 5/12 remain uncovered. Hence ψ0(t) linearly drops to the point (1/24, 5/12).
At the next critical moment t = 1/8, when the red and green intervals collide at p = 5/24, only the interval [17/24, 19/24] of length 1/12 remains uncovered, so ψ0(t) continues to (1/8, 1/12). The graph of ψ0(t) finally returns to the t-axis at the point (1/6, 0) and remains there for all t ≥ 1/6. The piecewise linear behavior of ψ0(t) can be described by specifying the corner points in Fig. 4:

(0, 2/3), (1/24, 5/12), (1/8, 1/12), (1/6, 0). ■

Theorem 3.2 extends Example 3.1 to any periodic sequence S and implies that the 0-th density function ψ0(t) is uniquely determined by the ordered gap lengths between successive intervals.

Theorem 3.2 (description of ψ0). Let a periodic sequence S = {p1, ..., pm} + Z consist of disjoint intervals with centers 0 ≤ p1 < · · · < pm < 1 and radii r1, ..., rm ≥ 0. Consider the total length l = 2∑_{i=1}^m ri and the gaps between successive intervals gi = (pi − ri) − (pi−1 + ri−1), where i = 1, ..., m and p0 = pm − 1, r0 = rm. Put the gaps in increasing order: g[1] ≤ g[2] ≤ · · · ≤ g[m].
Then the 0-th density ψ0[S](t) is piecewise linear with the following (unordered) corner points: (0, 1 − l) and

( g[i]/2, 1 − l − ∑_{j=1}^{i−1} g[j] − (m − i + 1) g[i] ) for i = 1, ..., m,

so the last corner is (g[m]/2, 0). If any corners are repeated, e.g. when g[i−1] = g[i], these corners are collapsed into one corner. ■

Proof By Definition 1.2, the 0-th density function ψ0(t) measures the total length of the subintervals in the unit cell [0, 1] that are not covered by any of the growing intervals Li(t) = [pi − ri − t, pi + ri + t], i = 1, ..., m. For t = 0, since all initial intervals Li(0) are disjoint, they cover the total length 2∑_{i=1}^m ri = l. Then the graph of ψ0(t) at t = 0 starts from the point (0, 1 − l). So ψ0(t) linearly decreases from the initial value ψ0(0) = 1 − l except for m critical values of t, where one of the gap intervals [pi + ri + t, pi+1 − ri+1 − t] between the successive growing intervals Li(t) and Li+1(t) shrinks to a point. These critical radii t are ordered according to the gaps g[1] ≤ g[2] ≤ · · · ≤ g[m].

The first critical radius is t = g[1]/2, when a shortest gap interval of the length g[1] is covered by the growing successive intervals. At this moment t = g[1]/2, all m growing intervals Li(t) have the total length l + m·g[1]. Then the 0-th density ψ0(t) has the first corner points (0, 1 − l) and (g[1]/2, 1 − l − m·g[1]). The second critical radius is t = g[2]/2, when all intervals Li(t) have the total length l + g[1] + (m − 1)g[2], i.e. the next corner point is (g[2]/2, 1 − l − g[1] − (m − 1)g[2]).

Fig. 3 The sequence S = {0, 1/3, 1/2} + Z has points of weights 1/12, 0, 1/12, respectively. The growing intervals around the red point 0 ≡ 1 (mod 1), the green point 1/3, and the blue point 1/2 have the same color for various radii t, see Examples 3.1, 4.1, 5.1.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' If g[1] = g[2], then both corner points coincide, so ψ0(t) will continue from the joint corner point.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' The above pattern generalizes to the i-th critical radius t = 1 2g[i], when all covered intervals have the total length i−1 � j=1 g[j] (for the fully covered intervals) plus (m − i + 1)g[i] (for the still growing intervals).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' For the final critical radius t = g[m] 2 , the whole unit cell [0, 1] is covered by the grown intervals because m � j=1 g[j] = 1 − l.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' The final corner is ( g[m] 2 , 0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' □ Example 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='3 applies Theorem 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='2 to get ψ0 found for the periodic sequence S in Example 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' 1/3Springer Nature 2021 LATEX template 6 Density functions of periodic sequences Fig.' 
Fig. 4 The 0-th density function ψ0(t) for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, respectively, see Example 3.1.

Example 3.3 (using Theorem 3.2). The sequence S = {0, 1/3, 1/2} + Z in Example 3.1 with points p1 = 0, p2 = 1/3, p3 = 1/2 of radii r1 = 1/12, r2 = 0, r3 = 1/12, respectively, has l = 2(r1 + r2 + r3) = 1/3 and the initial gaps between successive intervals

g1 = (p1 − r1) − (p3 + r3 − 1) = (1 − 1/12) − (1/2 + 1/12) = 1/3,
g2 = (p2 − r2) − (p1 + r1) = (1/3 − 0) − (0 + 1/12) = 1/4,
g3 = (p3 − r3) − (p2 + r2) = (1/2 − 1/12) − (1/3 + 0) = 1/12.

Order the gaps: g[1] = 1/12 < g[2] = 1/4 < g[3] = 1/3. Then

1 − l = 1 − 1/3 = 2/3,
1 − l − 3g[1] = 2/3 − 3/12 = 5/12,
1 − l − g[1] − 2g[2] = 2/3 − 1/12 − 2/4 = 1/12,
1 − l − g[1] − g[2] − g[3] = 2/3 − 1/12 − 1/4 − 1/3 = 0.
By Theorem 3.2, ψ0(t) has the corner points

(0, 1 − l) = (0, 2/3),
(g[1]/2, 1 − l − 3g[1]) = (1/24, 5/12),
(g[2]/2, 1 − l − g[1] − 2g[2]) = (1/8, 1/12),
(g[3]/2, 1 − l − g[1] − g[2] − g[3]) = (1/6, 0).

See the graph of the 0-th density ψ0(t) in Fig. 4. ■

By Theorem 3.2, any 0-th density function ψ0(t) is uniquely determined by the (unordered) set of gap lengths between successive intervals. Hence we can re-order these intervals without changing ψ0(t). For instance, the periodic sequence Q = {0, 1/2, 2/3} + Z with points 0, 1/2, 2/3 of weights 1/12, 1/12, 0 has the same set of ordered gaps g[1] = 1/12, g[2] = 1/4, g[3] = 1/3 as the periodic sequence S = {0, 1/3, 1/2} + Z in Example 3.1.
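This invariance under re-ordering is easy to check computationally. The sketch below evaluates the corner points of Theorem 3.2 for both S and Q with exact rational arithmetic (the helper names `gaps` and `psi0_corners` are illustrative, not from the paper):

```python
from fractions import Fraction as F

def gaps(points, radii):
    """Initial gaps g_i = (p_i - r_i) - (p_{i-1} + r_{i-1}) between
    successive intervals of {p_1, ..., p_m} + Z, with p_0 = p_m - 1, r_0 = r_m."""
    m = len(points)
    return [points[i] - radii[i]
            - (points[i - 1] + radii[i - 1] - (1 if i == 0 else 0))
            for i in range(m)]

def psi0_corners(points, radii):
    """Corner points of the 0-th density psi_0 given by Theorem 3.2."""
    g = sorted(gaps(points, radii))      # ordered gaps g[1] <= ... <= g[m]
    l, m = 2 * sum(radii), len(g)        # l = total length of the initial intervals
    corners = [(F(0), 1 - l)]
    for i, gi in enumerate(g):           # i-th critical radius t = g[i] / 2
        corners.append((gi / 2, 1 - l - sum(g[:i]) - (m - i) * gi))
    return corners

# S and Q from Example 3.1 and the remark above: the same multiset of gaps.
S = ([F(0), F(1, 3), F(1, 2)], [F(1, 12), F(0), F(1, 12)])
Q = ([F(0), F(1, 2), F(2, 3)], [F(1, 12), F(1, 12), F(0)])
assert psi0_corners(*S) == psi0_corners(*Q) == [
    (F(0), F(2, 3)), (F(1, 24), F(5, 12)), (F(1, 8), F(1, 12)), (F(1, 6), F(0))]
```

The assertion reproduces the corner points (0, 2/3), (1/24, 5/12), (1/8, 1/12), (1/6, 0) computed in Example 3.3 and confirms that S and Q share them.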
The above sequences S, Q are related by the mirror reflection t ↦ 1 − t. One can easily construct many non-isometric sequences with ψ0[S](t) = ψ0[Q](t). For any 1 ≤ i ≤ m − 3, the sequences Sm,i = {0, 2, 3, . . . , i + 2, i + 4, i + 5, . . . , m + 2} + (m + 2)Z have the same interval lengths d[1] = · · · = d[m−2] = 1, d[m−1] = d[m] = 2 but are not related by isometry (translations and reflections in R) because the two intervals of length 2 are separated by i − 1 intervals of length 1 in Sm,i.

4 The 1st density function ψ1

This section proves Theorem 4.2, which explicitly describes the 1st density ψ1[S](t) for any periodic sequence S of disjoint intervals. To prepare the proof of Theorem 4.2, Example 4.1 finds ψ1[S] for the sequence S from Example 3.1.

Example 4.1 (ψ1 for S = {0, 1/3, 1/2} + Z). The 1st density function ψ1(t) can be obtained as a sum of the three trapezoid functions ηR, ηG, ηB, each measuring the length of a region covered by a single interval of one color, see Fig. 3. At the initial moment t = 0, the red intervals [0, 1/12] ∪ [11/12, 1] have the total length ηR(0) = 1/6.
Fig. 5 The trapezoid functions ηR, ηG, ηB and the 1st density function ψ1(t) for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, see Example 4.1.

These red intervals [0, 1/12 + t] ∪ [11/12 − t, 1] grow for t ∈ [0, 1/8] until they touch the growing green interval, and have the total length ηR(1/8) = 1/6 + 2/8 = 5/12 in the second picture of Fig. 3. So the graph of the red length ηR(t) linearly grows with gradient 2 from the point (0, 1/6) to the corner point (1/8, 5/12). For t ∈ [1/8, 1/6], the left red interval is shrinking at the same rate (due to the overlapping green interval) as the right red interval continues to grow until t = 1/6, when it touches the blue interval [1/4, 3/4]. Hence the graph of ηR(t) remains constant for t ∈ [1/8, 1/6] up to the corner point (1/6, 5/12).
After that, the graph of ηR(t) linearly decreases (with gradient −2) until all red intervals are fully covered by the green and blue intervals at the moment t = 3/8, see the 6th picture in Fig. 3. Hence the trapezoid function ηR has the piecewise linear graph through the corner points (0, 1/6), (1/8, 5/12), (1/6, 5/12), (3/8, 0). After that, ηR(t) = 0 remains constant for t ≥ 3/8. Fig. 5 shows the graphs of ηR, ηG, ηB and ψ1 = ηR + ηG + ηB. ■

Theorem 4.2 extends Example 4.1 and proves that any ψ1(t) is a sum of trapezoid functions whose corners are explicitly described. We consider any index i = 1, . . . , m (of a point pi or a gap gi) modulo m, so that m + 1 ≡ 1 (mod m).

Theorem 4.2 (description of ψ1). Let a periodic sequence S = {p1, . . . , pm} + Z consist of disjoint intervals with centers 0 ≤ p1 < · · · < pm < 1 and radii r1, . . . , rm ≥ 0, respectively.
Consider the gaps gi = (pi − ri) − (pi−1 + ri−1), where i = 1, . . . , m and p0 = pm − 1, r0 = rm. Then the 1st density ψ1(t) is the sum of m trapezoid functions ηi, i = 1, . . . , m, with the corners

(0, 2ri), (gi/2, g + 2ri), (gi+1/2, g + 2ri), ((gi + gi+1)/2 + ri, 0), where g = min{gi, gi+1}.

Hence ψ1(t) is determined by the unordered set of unordered pairs (gi, gi+1), i = 1, . . . , m. ■

Proof The 1st density ψ1(t) equals the total length of subregions covered by exactly one of the intervals Li(t) = [pi − ri − t, pi + ri + t], i = 1, . . . , m, where all intervals are taken modulo 1 within [0, 1]. Hence ψ1(t) is the sum of the functions ηi, each measuring the length of the subinterval of Li(t) not covered by the other intervals Lj(t), j ∈ {1, . . . , m} − {i}.
Since the initial intervals Li(0) are disjoint, each function ηi(t) starts from the value ηi(0) = 2ri and linearly grows (with gradient 2) up to ηi(g/2) = 2ri + g, where g = min{gi, gi+1}, when the growing interval Li(t) of the length 2ri + 2t = 2ri + g touches its closest neighboring interval Li±1(t) across the shortest gap g. If (say) gi < gi+1, then the subinterval covered only by Li(t) is shrinking on the left and is growing at the same rate on the right until Li(t) touches the growing interval Li+1(t) on the right. During this growth, when t is between gi/2 and gi+1/2, the trapezoid function ηi(t) = g + 2ri remains constant. If gi = gi+1, this horizontal line collapses to one point in the graph of ηi(t). For t ≥ max{gi, gi+1}/2, the subinterval covered only by Li(t) is shrinking on both sides until the neighboring intervals Li±1(t) meet at a mid-point between their initial closest endpoints pi−1 + ri−1 and pi+1 − ri+1. This meeting time is t = (pi+1 − ri+1 − pi−1 − ri−1)/2 = (gi + 2ri + gi+1)/2, which is also illustrated by Fig. 6.
So the trapezoid function ηi has the corners (0, 2ri), (gi/2, 2ri + g), (gi+1/2, 2ri + g), ((gi + gi+1)/2 + ri, 0), as expected. □

Example 4.3 applies Theorem 4.2 to obtain the density ψ1 found for the periodic sequence S in Example 4.1.

Example 4.3 (using Theorem 4.2 for ψ1). The sequence S = {0, 1/3, 1/2} + Z in Example 4.1 with points p1 = 0, p2 = 1/3, p3 = 1/2 of radii r1 = 1/12, r2 = 0, r3 = 1/12, respectively, has the initial gaps between successive intervals g1 = 1/3, g2 = 1/4, g3 = 1/12, see all the computations in Example 3.3.

Case (R). In Theorem 4.2, for the trapezoid function ηR = η1 measuring the fractional length covered only by the red interval, we set k = 1 and i = 1. Then ri = 1/12, gi = 1/3 and gi+1 = 1/4, so

(gi + gi+1)/2 + ri = (1/3 + 1/4)/2 + 1/12 = 3/8, g = min{gi, gi+1} = 1/4, g + 2ri = 1/4 + 2/12 = 5/12.

Then ηR = η1 has the following corner points:

(0, 2ri) = (0, 1/6), (gi/2, g + 2ri) = (1/6, 5/12), (gi+1/2, g + 2ri) = (1/8, 5/12), ((gi + gi+1)/2 + ri, 0) = (3/8, 0),

where the two middle corners are accidentally swapped due to gi > gi+1, but they define the same trapezoid function as in the first picture of Fig. 5.

Case (G). In Theorem 4.2, for the trapezoid function ηG = η2 measuring the fractional length covered only by the green interval, we set k = 1 and i = 2. Then ri = 0, gi = 1/4 and gi+1 = 1/12, so

(gi + gi+1)/2 + ri = (1/4 + 1/12)/2 + 0 = 1/6, g = min{gi, gi+1} = 1/12, g + 2ri = 1/12 + 0 = 1/12.

Then ηG = η2 has the following corner points exactly as shown in the second picture of Fig. 5:

(0, 2ri) = (0, 0), (gi/2, g + 2ri) = (1/8, 1/12), (gi+1/2, g + 2ri) = (1/24, 1/12), ((gi + gi+1)/2 + ri, 0) = (1/6, 0).

Case (B). In Theorem 4.2, for the trapezoid function ηB = η3 measuring the fractional length covered only by the blue interval, we set k = 1 and i = 3. Then ri = 1/12, gi = 1/12 and gi+1 = 1/3, so

(gi + gi+1)/2 + ri = (1/12 + 1/3)/2 + 1/12 = 7/24, g = min{gi, gi+1} = 1/12, g + 2ri = 1/12 + 2/12 = 1/4.

Then ηB = η3 has the following corner points:

(0, 2ri) = (0, 1/6), (gi/2, g + 2ri) = (1/24, 1/4), (gi+1/2, g + 2ri) = (1/6, 1/4), ((gi + gi+1)/2 + ri, 0) = (7/24, 0),

exactly as shown in the third picture of Fig. 5. ■

Fig. 6 The distances g, s, g′ between line intervals used in the proofs of Theorems 4.2 and 5.2, shown here for k = 3.

5 Higher density functions ψk

This section proves Theorem 5.2 describing the k-th density function ψk[S](t) for any k ≥ 2 and a periodic sequence S of disjoint intervals. To prepare the proof of Theorem 5.2, Example 5.1 computes ψ2[S] for S from Example 3.1.

Example 5.1 (ψ2 for S = {0, 1/3, 1/2} + Z). The density ψ2(t) can be found as the sum of the trapezoid functions ηGB, ηBR, ηRG, each measuring the length of a double intersection, see Fig. 3. For the green interval [1/3 − t, 1/3 + t] and the blue interval [5/12 − t, 7/12 + t], the graph of the function ηGB(t) is piecewise linear and starts at the point (1/24, 0) because these intervals touch at t = 1/24. The green-blue intersection [5/12 − t, 1/3 + t] grows until t = 1/6, when the resulting interval [1/4, 1/2] touches the red interval on the left. At the same time, the graph of ηGB(t) is linearly growing (with gradient 2) to the corner (1/6, 1/4), see Fig. 7.
For t ∈ [1/6, 7/24], the green-blue intersection interval becomes shorter on the left, but grows at the same rate on the right until t = 7/24, when the interval [1/8, 5/8] touches the red interval [5/8, 1] on the right, see the 5th picture in Fig. 3. So the graph of ηGB(t) remains constant up to the point (7/24, 1/4). For t ∈ [7/24, 5/12], the green-blue intersection interval is shortening from both sides. So the graph of ηGB(t) linearly decreases (with gradient −2) and returns to the t-axis at the corner (5/12, 0), then remains constant: ηGB(t) = 0 for t ≥ 5/12. Fig. 7 shows all trapezoid functions for double intersections and ψ2 = ηGB + ηBR + ηRG. ■

Theorem 5.2 (description of ψk for k ≥ 2).
Let a periodic sequence S = {p1, . . . , pm} + Z consist of disjoint intervals with centers 0 ≤ p1 < · · · < pm < 1 and radii r1, . . . , rm ≥ 0, respectively. Consider the gaps gi = (pi − ri) − (pi−1 + ri−1) between the successive intervals of S, where i = 1, . . . , m and p0 = pm − 1, r0 = rm.
For k ≥ 2, the density function ψk(t) equals the sum of m trapezoid functions ηk,i(t), i = 1, . . . , m, each having the following corner points:

(s/2, 0), ((g + s)/2, g), ((s + g′)/2, g), ((g + s + g′)/2, 0),

where g, g′ are the minimum and maximum values in the pair {gi + 2ri, gi+k + 2ri+k−1}, and s = Σ_{j=i+1}^{i+k−1} gj + 2 Σ_{j=i+1}^{i+k−2} rj, so s = gi+1 for k = 2. Hence ψk(t) is determined by the unordered set of the ordered tuples (g, s, g′), i = 1, . . . , m. ■

Springer Nature 2021 LATEX template · Density functions of periodic sequences

Fig. 7 The trapezoid functions ηGB, ηBR, ηRG and the 2nd density function ψ2(t) for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, see Example 5.1.

Proof The k-th density function ψk(t) measures the total fractional length of k-fold intersections among the m intervals Li(t) = [pi − ri − t, pi + ri + t], i = 1, . . . , m. We visualize all such intervals Li(t) in the line R without mapping them modulo 1 to the unit cell [0, 1]. Since all radii ri ≥ 0, only k successive intervals can contribute to a k-fold intersection. So a k-fold intersection of growing intervals emerges only when the two intervals Li(t) and Li+k−1(t) overlap, because their intersection must also be covered by all the intermediate intervals Li(t), Li+1(t), . . . , Li+k−1(t). Then the density ψk(t) equals the sum of the m trapezoid functions ηk,i, i = 1, . . . , m, each equal to the length of the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) not covered by other intervals. The function ηk,i(t) remains 0 until the first critical moment t when 2t equals the distance between the points pi + ri and pi+k−1 − ri+k−1 in R, see Fig. 6, so 2t = Σ_{j=i+1}^{i+k−1} gj + 2 Σ_{j=i+1}^{i+k−2} rj = s. Hence t = s/2 and (s/2, 0) is the first corner point of ηk,i(t). At t = s/2, the interval of the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) starts expanding on both sides.
Hence ηk,i(t) starts increasing (with gradient 2) until the k-fold intersection touches one of the neighboring intervals Li−1(t) or Li+k(t) on the left or on the right. The left interval Li−1(t) touches the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) when 2t equals the distance from pi−1 + ri−1 (the right endpoint of Li−1) to pi+k−1 − ri+k−1 (the left endpoint of Li+k−1), see Fig. 6, so 2t = Σ_{j=i}^{i+k−1} gj + 2 Σ_{j=i}^{i+k−2} rj = gi + 2ri + s. The right interval Li+k(t′) touches the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t′) when 2t′ equals the distance from pi + ri (the right endpoint of Li) to pi+k − ri+k (the left endpoint of Li+k), see Fig. 6, so 2t′ = Σ_{j=i+1}^{i+k} gj + 2 Σ_{j=i+1}^{i+k−1} rj = s + gi+k + 2ri+k−1. If (say) gi + 2ri = g < g′ = gi+k + 2ri+k−1, the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) first touches Li−1 at the earlier moment t before reaching Li+k(t′) at the later moment t′. At the earlier moment, ηk,i(t) equals 2(t − s/2) = gi + 2ri = g and has the corner ((g + s)/2, g).
After that, the k-fold intersection shrinks on the left and expands at the same rate on the right. So the function ηk,i(t) = g remains constant until the k-fold intersection touches the right interval Li+k(t′). At this later moment t′ = (s + g′)/2, where g′ = gi+k + 2ri+k−1, the function ηk,i(t′) still equals g and has the corner ((s + g′)/2, g). If gi + 2ri = g′ > g = gi+k + 2ri+k−1, the growing intervals Li−1(t) and Li+k(t) touch the k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) in the opposite order. However, the above arguments lead to the same corners ((g + s)/2, g) and ((s + g′)/2, g) of ηk,i(t). If g = g′, the two corners collapse to one corner in the graph of ηk,i(t). The k-fold intersection ∩_{j=i}^{i+k−1} Lj(t) becomes fully covered when the neighboring intervals Li−1(t) and Li+k(t) meet each other.
At this moment, 2t equals the distance from pi−1 + ri−1 (the right endpoint of Li−1) to pi+k − ri+k (the left endpoint of Li+k), see Fig. 6, so 2t = Σ_{j=i}^{i+k} gj + 2 Σ_{j=i}^{i+k−1} rj = gi + 2ri + s + gi+k + 2ri+k−1 = g + s + g′. The graph of ηk,i(t) has the final corner ((g + s + g′)/2, 0). □

Example 5.3 applies Theorem 5.2 to obtain the function ψ2 found for the periodic sequence S in Example 3.1.

Example 5.3 (using Theorem 5.2 for ψ2). The sequence S = {0, 1/3, 1/2} + Z in Example 4.1 with points p1 = 0, p2 = 1/3, p3 = 1/2 of radii r1 = 1/12, r2 = 0, r3 = 1/12, respectively, has the initial gaps g1 = 1/3, g2 = 1/4, g3 = 1/12, see Example 3.3. By Theorem 5.2, the 2nd density function ψ2[S](t) is expressed as a sum of the trapezoid functions computed via their corners below.

Case (GB). For the function ηGB measuring the double intersections of the green and blue intervals centered at p2 = pi and p3 = pi+k−1, we set k = 2 and i = 2. Then we have the radii ri = 0 and ri+1 = 1/12, the gaps gi = 1/4, gi+1 = 1/12, gi+2 = 1/3, and the sum s = gi+1 = 1/12. The pair {gi + 2ri, gi+2 + 2ri+1} = {1/4 + 0, 1/3 + 2/12} has the minimum value g = 1/4 and the maximum value g′ = 1/2. Then η2,2[S](t) = ηGB has the following corners, as expected in the top picture of Fig. 7:

(s/2, 0) = (1/24, 0), ((g + s)/2, g) = ((1/4 + 1/12)/2, 1/4) = (1/6, 1/4), ((s + g′)/2, g) = ((1/12 + 1/2)/2, 1/4) = (7/24, 1/4), ((g + s + g′)/2, 0) = ((1/4 + 1/12 + 1/2)/2, 0) = (5/12, 0).

Case (BR). For the trapezoid function ηBR measuring the double intersections of the blue and red intervals centered at p3 = pi and p1 = pi+k−1, we set k = 2 and i = 3. Then we have the radii ri = 1/12 = ri+1, the gaps gi = 1/12, gi+1 = 1/3, gi+2 = 1/4, and s = gi+1 = 1/3. The pair {gi + 2ri, gi+2 + 2ri+1} = {1/12 + 2/12, 1/4 + 2/12} has the minimum g = 1/4 and the maximum g′ = 5/12. Then η2,3[S](t) = ηBR has the following corners, as expected in the second picture of Fig. 7:

(s/2, 0) = (1/6, 0), ((g + s)/2, g) = ((1/4 + 1/3)/2, 1/4) = (7/24, 1/4), ((s + g′)/2, g) = ((1/3 + 5/12)/2, 1/4) = (3/8, 1/4), ((g + s + g′)/2, 0) = ((1/4 + 1/3 + 5/12)/2, 0) = (1/2, 0).

Case (RG).
For the trapezoid function ηRG measuring the double intersections of the red and green intervals centered at p1 = pi and p2 = pi+k−1, we set k = 2 and i = 1. Then we have the radii ri = 1/12 and ri+1 = 0, the gaps gi = 1/3, gi+1 = 1/4, gi+2 = 1/12, and s = gi+1 = 1/4. The pair {gi + 2ri, gi+2 + 2ri+1} = {1/3 + 2/12, 1/12 + 0} has the minimum g = 1/12 and the maximum g′ = 1/2. Then η2,1[S](t) = ηRG has the following corners, as expected in the third picture of Fig. 7:

(s/2, 0) = (1/8, 0), ((g + s)/2, g) = ((1/12 + 1/4)/2, 1/12) = (1/6, 1/12), ((s + g′)/2, g) = ((1/4 + 1/2)/2, 1/12) = (3/8, 1/12), ((g + s + g′)/2, 0) = ((1/12 + 1/4 + 1/2)/2, 0) = (5/12, 0). ■

6 Properties of new densities

This section proves the periodicity of the sequence ψk with respect to the index k ≥ 0 in Theorem 6.2, which was a bit unexpected given the original Definition 1.2. We start with a simpler example for the familiar 3-point sequence in Fig. 3.

Example 6.1 (periodicity of ψk in the index k). Let the periodic sequence S = {0, 1/3, 1/2} + Z have three points p1 = 0, p2 = 1/3, p3 = 1/2 of radii r1 = 1/12, r2 = 0, r3 = 1/12, respectively. The initial intervals L1(0) = [−1/12, 1/12], L2(0) = [1/3, 1/3], L3(0) = [5/12, 7/12] have the 0-fold intersection measured by ψ0(0) = 2/3 and the 1-fold intersection measured by ψ1(0) = 1/3, see Fig. 4 and 5.
By the time t = 1/2 the initial intervals grow to L1(1/2) = [−7/12, 7/12], L2(1/2) = [−1/6, 5/6], L3(1/2) = [−1/12, 13/12]. The grown intervals at the radius t = 1/2 have the 3-fold intersection [−1/12, 7/12] of length ψ3(1/2) = 2/3, which coincides with ψ0(0) = 2/3. With the extra interval L4(1/2) = [5/12, 19/12] centered at p4 = 1, the 4-fold intersection is L1 ∩ L2 ∩ L3 ∩ L4 = [5/12, 7/12]. With the extra interval L5(1/2) = [5/6, 11/6] centered at p5 = 4/3, the 4-fold intersection L2 ∩ L3 ∩ L4 ∩ L5 is the single point 5/6. With the extra interval L6(1/2) = [11/12, 13/12] centered at p6 = 3/2, the 4-fold intersection is L3 ∩ L4 ∩ L5 ∩ L6 = [11/12, 13/12]. Hence the total length of the 4-fold intersections at t = 1/2 is ψ4(1/2) = 1/3, which coincides with ψ1(0) = 1/3.
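The interval computations above can be checked numerically with exact rational arithmetic. The sketch below uses our own helper names (`grown`, `inter_len`, not from the paper's code) and assumes the motif 0, 1/3, 1/2 with radii 1/12, 0, 1/12 extended by one period, as in the example.

```python
from fractions import Fraction as F

def grown(p, r, t):
    # the interval L(t) = [p - r - t, p + r + t] grown by radius t
    return (p - r - t, p + r + t)

def inter_len(intervals):
    # length of the common intersection of the given intervals (0 if empty)
    lo = max(a for a, b in intervals)
    hi = min(b for a, b in intervals)
    return max(hi - lo, F(0))

# motif points 0, 1/3, 1/2 of radii 1/12, 0, 1/12, extended by one period
pts  = [F(0), F(1, 3), F(1, 2), F(1), F(4, 3), F(3, 2)]
rads = [F(1, 12), F(0), F(1, 12)] * 2
L = [grown(p, r, F(1, 2)) for p, r in zip(pts, rads)]

# 3-fold intersection of L1, L2, L3 at t = 1/2: length 2/3 = psi_0(0)
print(inter_len(L[:3]))                              # 2/3
# total length of the three 4-fold intersections: 1/3 = psi_1(0)
print(sum(inter_len(L[i:i + 4]) for i in range(3)))  # 1/3
```

The two printed lengths reproduce the coincidences ψ3(1/2) = ψ0(0) and ψ4(1/2) = ψ1(0) claimed above.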
For the larger t = 1, the six grown intervals L1(1) = [−13/12, 13/12], L2(1) = [−2/3, 4/3], L3(1) = [−7/12, 19/12], L4(1) = [−1/12, 25/12], L5(1) = [1/3, 7/3], L6(1) = [5/12, 31/12] have the 6-fold intersection [5/12, 13/12] of length ψ6(1) = 2/3, coinciding with ψ0(0) = ψ3(1/2) = 2/3. ■

Theorem 6.2 proves that the coincidences in Example 6.1 are not accidental. The periodicity of ψk with respect to k is illustrated by Fig. 8.

Theorem 6.2 (periodicity of ψk in the index k). The density functions ψk[S] of a periodic sequence S = {p1, . . . , pm} + Z consisting of disjoint intervals with centers 0 ≤ p1 < · · · < pm < 1 and radii r1, . . . , rm ≥ 0, respectively, satisfy the periodicity ψk+m(t + 1/2) = ψk(t) for any k ≥ 0 and t ≥ 0. ■

Proof Since the initial intervals are disjoint, for k ≥ 0, any (k + m)-fold intersection involves k + m successive intervals Li(t), . . . , Li+k+m−1(t) centered around the points of S. Then we can find an interval [x, x + 1] covering exactly m of these initial intervals of S.
By collapsing [x, x + 1] to the point x, any (k + m)-fold intersection of k + m intervals grown by a radius r ≥ 1/2 becomes a k-fold intersection of k intervals grown by the radius t = r − 1/2.

Fig. 8 The densities ψk, k = 0, . . . , 9, for the 1-period sequence S whose points 0, 1/3, 1/2 have radii 1/12, 0, 1/12, respectively. The densities ψ0, ψ1, ψ2 are described in Examples 3.1, 4.1, 5.1 and determine all other densities by the periodicity in Theorem 6.2.
Both k-fold and (k + m)-fold intersections within any unit cell have the same fractional length, so ψk+m(t + 1/2) = ψk(t) for any t ≥ 0. □

The symmetry ψm−k(1/2 − t) = ψk(t) for k = 0, . . . , [m/2] and t ∈ [0, 1/2] from [3, Theorem 8] no longer holds for points with different radii. For example, ψ1(t) ≠ ψ2(1/2 − t) for the periodic sequence S = {0, 1/3, 1/2} + Z, see Fig. 5 and 7. If all points have the same radius r, [3, Theorem 8] implies the symmetry after replacing t by t + 2r. The main results of [3] implied that all density functions cannot distinguish the non-isometric sequences S15 = {0, 1, 3, 4, 5, 7, 9, 10, 12} + 15Z and Q15 = {0, 1, 3, 4, 6, 8, 9, 12, 14} + 15Z of points with zero radii.
Example 6.3 shows that the densities for sequences with non-zero radii are strictly stronger and distinguish the sequences S15 ≇ Q15.

Example 6.3 (ψk for S15, Q15 with neighbor radii). For any point p in a periodic sequence S ⊂ R, define its neighbor radius as the half-distance to a closest neighbor of p within the sequence S. This choice of radii respects isometry in the sense that periodic sequences S, Q with zero radii are isometric if and only if S, Q with neighbor radii are isometric. Fig. 9 shows that the densities ψk for k ≥ 2 distinguish the non-isometric sequences S15 and Q15 scaled down by factor 15 to the unit cell [0, 1], see Example 2.1.
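Assuming the neighbor-radius definition above, the radii for S15 and Q15 can be computed with a short helper; the function name `neighbor_radii` is our illustration, not code from the paper.

```python
from fractions import Fraction as F

def neighbor_radii(motif, period):
    # half-distance from each motif point to its closest neighbor,
    # with neighbors taken cyclically across the period boundary
    m = len(motif)
    wrap = motif[0] + period - motif[-1]   # gap across the period boundary
    radii = []
    for i in range(m):
        left = motif[i] - motif[i - 1] if i > 0 else wrap
        right = motif[i + 1] - motif[i] if i < m - 1 else wrap
        radii.append(F(min(left, right), 2))
    return radii

S15 = [0, 1, 3, 4, 5, 7, 9, 10, 12]
Q15 = [0, 1, 3, 4, 6, 8, 9, 12, 14]
print(neighbor_radii(S15, 15))  # radii before scaling the period down to 1
print(neighbor_radii(Q15, 15))
```

Dividing the points and the resulting radii by 15 gives the scaled sequences in [0, 1] used for Fig. 9.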
■

Corollary 6.4 (computation of ψk(t)). Let S, Q ⊂ R be periodic sequences with at most m motif points. For k ≥ 1, one can draw the graph of the k-th density function ψk[S] in time O(m2). One can check in time O(m3) whether Ψ[S] = Ψ[Q]. ■

Proof To draw the graph of ψk[S] or evaluate the k-th density function ψk[S](t) at any radius t, we first use the periodicity from Theorem 6.2 to reduce k to the range 0, 1, . . . , m.
In time O(m log m) we put the points from a unit cell U (scaled to [0, 1] for convenience) in the increasing (cyclic) order p1, . . . , pm. In time O(m) we compute the gaps gi = (pi − ri) − (pi−1 + ri−1) between successive intervals. For k = 0, we put the gaps in the increasing order g[1] ≤ · · · ≤ g[m] in time O(m log m). By Theorem 3.2, in time O(m2) we write down the O(m) corner points whose horizontal coordinates are the critical radii where ψ0(t) can change its gradient. We evaluate ψ0 at every critical radius t by summing the values of m trapezoid functions at t, which needs O(m2) time. It remains to plot the points at all O(m) critical radii t and connect the successive points by straight lines, so the total time is O(m2). For any larger fixed index k = 1, . . . , m, in time O(m2) we write down all O(m) corner points from Theorems 4.2 and 5.2, which leads to the graph of ψk(t) similarly to the above argument for k = 0.

Fig. 9 The densities ψk, k = 0, . . . , 10, distinguish (already for k ≥ 2) the sequences (scaled down by period 15) S15 = {0, 1, 3, 4, 5, 7, 9, 10, 12} + 15Z (top) and Q15 = {0, 1, 3, 4, 6, 8, 9, 12, 14} + 15Z (bottom), where the radius ri of any point is the half-distance to its closest neighbor. These sequences with zero radii have identical ψk for all k, see [3, Example 10].
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' To decide if the infinite sequences of density func- tions coincide: Ψ[S] = Ψ[Q], by Theorem 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='2 it suffices to check only if O(m) density functions coincide: ψk[S](t) = ψk[Q](t) for k = 0, 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' , [ m 2 ].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' To check if two piecewise linear functions coincide, it remains to compare their values at all O(m) critical radii t from the corner points in Theorems 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='2, 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='2, 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' Since these values were found in time O(m2) above, the total time for k = 0, 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' , [ m 2 ] is O(m3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content=' □ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='75 psi_0 psi_1 psi_2 psi_3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='50 K psi_4 psi psi_5 psi_6 psi_7 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='25 psi_8 psi_ 9 psi_10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='00 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='6 T0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='75 psi_ 0 psi_1 psi_2 psi_3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE4T4oBgHgl3EQfiw29/content/2301.05137v1.pdf'} +page_content='50 K psi_4 psi_5 psi_6 psi_7 0.' 
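The k = 0 step of this proof admits a short closed form in the zero-radius case: each cyclic gap g between consecutive points stays uncovered on a length max(g − 2t, 0), because the two neighboring intervals grow into the gap by t from each side. A minimal sketch under that zero-radius assumption (the function name `psi0` and the direct gap formula, instead of the paper's corner-point enumeration, are my choices):

```python
from fractions import Fraction as F

def psi0(points, period, t):
    """psi_0(t) for a zero-radius periodic sequence: the fraction of the
    unit cell left uncovered by the grown intervals [p - t, p + t].
    Each cyclic gap g between consecutive points contributes max(g - 2t, 0)."""
    pts = sorted(points)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(pts[0] + period - pts[-1])  # wrap-around gap of the cycle
    return sum(max(g - 2 * t, 0) for g in gaps) / F(period)

# The 4-point sequence of Fig. 10: S = {0, 1/8, 1/4, 3/4} + Z.
S = [F(0), F(1, 8), F(1, 4), F(3, 4)]
print(psi0(S, 1, F(1, 5)))  # 1/10: only the gap of length 1/2 stays partly uncovered
print(psi0(S, 1, F(1, 4)))  # 0: at t = 1/4 the whole cell is covered
```

The piecewise linear graph of ψ0 then follows by evaluating this formula only at the critical radii t = g[i]/2 where a gap closes, as in the proof above.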
All previous examples show densities with a single local maximum. However, the new R code [5] helped us discover examples of the opposite behavior.

Fig. 10 For the periodic sequence S = {0, 1/8, 1/4, 3/4} + Z, whose points all have radii 0, the 2nd density ψ2[S](t) has a local minimum at t = 1/4 between two local maxima.

Example 6.5 (densities with multiple maxima). Fig. 10 shows a simple 4-point sequence S whose 2nd density ψ2[S] has two local maxima. Figs. 11 and 12 show more complicated sequences whose density functions have more than two maxima. ■

Fig. 11 For the sequence S = {0, 1/81, 1/27, 1/9, 1/3} + Z, whose points all have radii 0, ψ2[S], equal to the sum of the five shown trapezoid functions, has three maxima.

Fig. 12 For the sequence S = {0, 1/64, 1/16, 1/8, 1/4, 3/4} + Z, whose points all have radii 0, ψ3[S] has 5 local maxima.

7 Conclusions and future work

In comparison with the past work [3], the key contributions of this paper are the following. Definition 1.2 extends density functions ψk to any periodic sets of points with radii ri ≥ 0. Theorems 3.2, 4.2, 5.2 explicitly describe all ψk for any periodic sequence S of points with radii. These descriptions allowed us to justify the periodicity of ψk in Theorem 6.2 and a quadratic algorithm computing any ψk in Corollary 6.4. The code [5] helped us distinguish S15 ≇ Q15 in Example 6.3 and find sequences whose densities have multiple local maxima in Example 6.5.
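The multiple-maxima discovery of Example 6.5 is easy to reproduce by computing ψk directly from its definition: the fraction of a unit cell covered by exactly k of the intervals [p − t, p + t]. The sketch below is a plain endpoint sweep in exact arithmetic, not the paper's O(m²) corner-point algorithm or the R code [5]; the function name `psi` and the zero-radius restriction are my choices.

```python
from fractions import Fraction as F

def psi(points, period, k, t):
    """Fraction of the unit cell [0, period) covered by exactly k of the
    intervals [p - t, p + t] around all periodic copies of the given
    zero-radius points, computed by an endpoint sweep."""
    reach = int(t / period) + 2  # enough neighbouring cells to cover [0, period)
    events = []                  # (position, multiplicity change)
    for p in points:
        for s in range(-reach, reach + 1):
            a = max(p + s * period - t, 0)
            b = min(p + s * period + t, period)
            if a < b:            # keep only the part inside the cell
                events.append((a, 1))
                events.append((b, -1))
    events.sort()
    total, mult, prev = 0, 0, 0
    for pos, delta in events:
        if mult == k:            # multiplicity is constant on [prev, pos]
            total += pos - prev
        prev, mult = pos, mult + delta
    if k == 0:                   # uncovered tail of the cell after the last event
        total += period - prev
    return total / F(period)

S = [F(0), F(1, 8), F(1, 4), F(3, 4)]  # the sequence of Fig. 10
for t in (F(1, 5), F(1, 4), F(3, 10)):
    print(t, psi(S, 1, 2, t))  # 7/20, 1/4, 7/20: a local minimum of psi_2 at t = 1/4
```

With zero radii, the same sweep also gives identical values for S15 and Q15 of Fig. 9 (scaled down by 15), in line with [3, Example 10].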
Here are the open problems for future work.

Verify if the density functions ψk[S](t) for small values of k distinguish all non-isometric periodic point sets S ⊂ Rn, at least with radii 0.

Characterize the periodic sequences S ⊂ R all of whose density functions ψk for k ≥ 1 have a unique local maximum, unlike Example 6.5.

Similarly to Theorems 3.2, 4.2, 5.2, analytically describe the density functions ψk[S] for periodic point sets S ⊂ Rn in higher dimensions n > 1.

This research was supported by the grants of the UK Engineering and Physical Sciences Research Council (EP/R018472/1, EP/X018474/1) and the Royal Academy of Engineering Industrial Fellowship (IF2122/186) of the last author. We thank all reviewers for their time and helpful advice.

References

[1] Anosova, O., Kurlin, V.: Introduction to periodic geometry and topology. arXiv:2103.02749 (2021)

[2] Anosova, O., Kurlin, V.
: An isometry classification of periodic point sets. In: Lecture Notes in Computer Science (Proceedings of DGMM), vol. 12708, pp. 229–241 (2021)

[3] Anosova, O., Kurlin, V.: Density functions of periodic sequences. In: Lecture Notes in Computer Science (Proceedings of DGMM), vol. 13493, pp. 395–408 (2022)

[4] Anosova, O., Kurlin, V.: Recognition of near-duplicate periodic patterns in polynomial time. arXiv:2205.15298 (2022)

[5] Anosova, O.: R code for density functions of periodic sequences (2023), https://github.com/oanosova/DensityFunctions1D

[6] Bright, M., Cooper, A.I., Kurlin, V.: Welcome to a continuous world of 3-dimensional lattices. arXiv:2109.11538 (2021)

[7] Bright, M., Cooper, A.I., Kurlin, V.: Geographic-style maps for 2-dimensional lattices. Acta Crystallographica Section A 79(1), 1–13 (2023)

[8] Edelsbrunner, H., Heiss, T., Kurlin, V., Smith, P., Wintraecken, M.: The density fingerprint of a periodic point set. In: SoCG, vol. 189, pp. 32:1–32:16 (2021)

[9] Grünbaum, F., Moore, C.: The use of higher-order invariants in the determination of generalized Patterson cyclotomic sets. Acta Cryst. A 51, 310–323 (1995)

[10] Kurlin, V.: A complete isometry classification of 3D lattices. arXiv:2201.10543 (2022)

[11] Kurlin, V.: Computable complete invariants for finite clouds of unlabeled points. arXiv:2207.08502 (2022), http://kurlin.org/projects/complete-isometry-invariants.pdf

[12] Kurlin, V.: Exactly computable and continuous metrics on isometry classes of finite and 1-periodic sequences. arXiv:2205.04388 (2022), http://kurlin.org/projects/periodic-geometry-topology/metric1D.pdf

[13] Kurlin, V.: Mathematics of 2-dimensional lattices. Foundations of Computational Mathematics (2022), http://kurlin.org/projects/lattice-geometry/lattices2Dmaths.pdf

[14] Mosca, M., Kurlin, V.: Voronoi-based similarity distances between arbitrary crystal lattices. Crystal Research and Technology 55(5), 1900197 (2020)

[15] Pozdnyakov, S., et al.: Incompleteness of atomic structure representations. Phys. Rev. Lett. 125, 166001 (2020)

[16] Smith, P., Kurlin, V.: A practical algorithm for degree-k Voronoi domains of three-dimensional periodic point sets. In: Lecture Notes in Computer Science (Proceedings of ISVC), vol. 13599, pp. 377–391 (2022)

[17] Smith, P., Kurlin, V.: Families of point sets with identical 1D persistence. arXiv:2202.00577 (2022), http://kurlin.org/projects/periodic-geometry-topology/trivial-persistence.pdf

[18] Widdowson, D., Kurlin, V.: Pointwise distance distributions of periodic sets. arXiv:2108.04798 (version 1) (2021)

[19] Widdowson, D., Kurlin, V.: Resolving the data ambiguity for periodic crystals. Advances in Neural Information Processing Systems 35 (arXiv:2108.04798, v2) (2022), http://kurlin.org/projects/periodic+geometry/NeurIPS2022PDD.pdf

[20] Widdowson, D., et al.: Average minimum distances of periodic point sets. MATCH Comm. Math.
diff --git a/MdE1T4oBgHgl3EQftQUl/content/2301.03374v1.pdf b/MdE1T4oBgHgl3EQftQUl/content/2301.03374v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b9b49f5867cb29417fd1a5b6f39852e34f02f730 --- /dev/null +++ b/MdE1T4oBgHgl3EQftQUl/content/2301.03374v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc0c51773605073b29f17ccb50987dfb951d1560bcc4e9776b3c30a6806a4a5c +size 8692727 diff --git a/OtAyT4oBgHgl3EQfUfen/vector_store/index.faiss b/OtAyT4oBgHgl3EQfUfen/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..9e5ec0f14d65aea8aeeae5e84f86b1597cf847fd --- /dev/null +++ b/OtAyT4oBgHgl3EQfUfen/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac37ec030752deb69937cdae794be524b673730286f54de13f8e5fb734a9a796 +size 3276845 diff --git a/OtAyT4oBgHgl3EQfUfen/vector_store/index.pkl b/OtAyT4oBgHgl3EQfUfen/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..bcd06a708c6ff61f23ac04dfa517c69ea6bc188d --- /dev/null +++ b/OtAyT4oBgHgl3EQfUfen/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e80552674d606850c456ed200b9a6f3e9357bda630ffa7365fa2057a43939d1 +size 120390 diff --git a/PdE0T4oBgHgl3EQfkAEs/content/2301.02466v1.pdf b/PdE0T4oBgHgl3EQfkAEs/content/2301.02466v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ab720c8465446b7285baa1c0144415396bebd86c --- /dev/null +++ b/PdE0T4oBgHgl3EQfkAEs/content/2301.02466v1.pdf
@@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:de288ecf10fb888fdb10618959b96999da403d8696e47ec0725b2fb388a38307 +size 3971536 diff --git a/PdE0T4oBgHgl3EQfkAEs/vector_store/index.pkl b/PdE0T4oBgHgl3EQfkAEs/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..7987e43dbff93b5c63360a3863fdc9e8e6557c5d --- /dev/null +++ b/PdE0T4oBgHgl3EQfkAEs/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4266c243207217c1333dc9bfa1ad90865ee43030a73d7c4ca37d66294e855026 +size 139122 diff --git a/PtAyT4oBgHgl3EQf7fqt/vector_store/index.pkl b/PtAyT4oBgHgl3EQf7fqt/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..d17de98d942a51cf1658a94acd50b843ce291a7c --- /dev/null +++ b/PtAyT4oBgHgl3EQf7fqt/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05f9fdfb61d46533f74d817bdd89532d48cc80026270358847cd434095918ccb +size 927998 diff --git a/QNFJT4oBgHgl3EQf2i1I/vector_store/index.pkl b/QNFJT4oBgHgl3EQf2i1I/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..b6bd565a2660ea0efaf3d4beeccd326a40375f10 --- /dev/null +++ b/QNFJT4oBgHgl3EQf2i1I/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc34add24b7d56a8a21acae64aa0eea37d346480b1f6e1f3c457fe8a148843ef +size 188787 diff --git a/QNFRT4oBgHgl3EQfJje8/content/tmp_files/2301.13496v1.pdf.txt b/QNFRT4oBgHgl3EQfJje8/content/tmp_files/2301.13496v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..35adb6c151c763bcb939d0962049af750af8ef48 --- /dev/null +++ b/QNFRT4oBgHgl3EQfJje8/content/tmp_files/2301.13496v1.pdf.txt @@ -0,0 +1,1447 @@ +arXiv:2301.13496v1 [math.AP] 31 Jan 2023 +Conditional regularity for the Navier–Stokes–Fourier system +with Dirichlet boundary conditions +Danica Basari´c ∗ +Eduard Feireisl ∗ +Hana Mizerov´a ∗,† +∗ Institute of 
Mathematics of the Czech Academy of Sciences, Žitná 25, CZ-115 67 Praha 1, Czech Republic
† Department of Mathematical Analysis and Numerical Mathematics, Comenius University, Mlynská dolina, 842 48 Bratislava, Slovakia

Abstract

We consider the Navier–Stokes–Fourier system with inhomogeneous boundary conditions for the velocity and the temperature. We show that solutions emanating from sufficiently regular data remain regular as long as the density ̺, the absolute temperature ϑ, and the modulus of the fluid velocity |u| remain bounded.

Keywords: Navier–Stokes–Fourier system, conditional regularity, blow-up criterion, regular solution

1 Introduction

Standard systems of equations in fluid mechanics, including the Navier–Stokes–Fourier system governing the motion of a compressible, viscous, and heat conducting fluid, are well posed in the class of strong solutions on a possibly short time interval [0, Tmax). The recent results of Merle et al. [16], [17] strongly indicate that Tmax may be finite, at least in the idealized case of “isentropic” viscous flow. Conditional regularity results guarantee that a blow-up will not occur as long as some lower order norms of solutions are controlled.

We consider the Navier–Stokes–Fourier system governing the time evolution of the mass density ̺ = ̺(t, x), the (absolute) temperature ϑ = ϑ(t, x), and the velocity u = u(t, x) of a compressible, viscous, and heat conducting fluid:

∗ The work of D.B., E.F., and H.M. was supported by the Czech Science Foundation (GAČR), Grant Agreement 21-02411S. The Institute of Mathematics of the Czech Academy of Sciences is supported by RVO:67985840.

∂t̺ + divx(̺u) = 0, (1.1)
∂t(̺u) + divx(̺u ⊗ u) + ∇xp(̺, ϑ) = divxS(Dxu) + ̺f, Dxu = (1/2)(∇xu + ∇x^T u), (1.2)
∂t(̺e(̺, ϑ)) + divx(̺e(̺, ϑ)u) + divxq(∇xϑ) = S(Dxu) : Dxu − p(̺, ϑ) divxu.
(1.3)
The fluid is Newtonian; the viscous stress S is given by Newton’s rheological law
S(Dxu) = 2µ (Dxu − (1/3) divxu I) + η divxu I, µ > 0, η ≥ 0. (1.4)
The heat flux obeys Fourier’s law
q(∇xϑ) = −κ∇xϑ, κ > 0. (1.5)
The equation of state for the pressure p and the internal energy e is given by the standard Boyle–Mariotte law of a perfect gas,
p(̺, ϑ) = ̺ϑ, e(̺, ϑ) = cvϑ, cv > 0. (1.6)
For the sake of simplicity, we suppose that the viscosity coefficients µ, η, the heat conductivity coefficient κ, as well as the specific heat at constant volume cv, are constant.
There is a large number of recent results concerning conditional regularity for the Navier–Stokes–Fourier system in terms of various norms. Fan, Jiang, and Ou [4] consider a bounded fluid domain Ω ⊂ R3 with the conservative boundary conditions
u|∂Ω = 0, ∇xϑ · n|∂Ω = 0. (1.7)
The same problem is studied by Sun, Wang, and Zhang [19] and later by Huang, Li, and Wang [14]. There are results for the Cauchy problem Ω = R3 by Huang and Li [13], and by Jiu, Wang, and Ye [15]. Possibly the best result so far has been established in [11], where the blow-up criterion for both the Cauchy problem and the boundary value problem (1.7) is formulated in terms of the maximum of the density and a Serrin type regularity condition for the temperature:
lim sup_{t→Tmax−} ( ∥̺(t, ·)∥L∞ + ∥ϑ − ϑ∞∥Ls(0,t)(Lr) ) = ∞, 3/2 < r ≤ ∞, 1 ≤ s ≤ ∞, 2/s + 3/r ≤ 2,
where ϑ∞ denotes the far field temperature in the Cauchy problem; cf. also the previous results by Wen and Zhu [23], [24].
Much less is known in the case of the Dirichlet boundary conditions
u|∂Ω = uB, ϑ|∂Ω = ϑB. (1.8)
Fang, Zi, and Zhang [5] showed that a strong solution of the Navier–Stokes–Fourier system remains regular up to a time T > 0 if (i) Ω ⊂ R2 is a bounded domain, (ii) uB = 0, ϑB = 0, and (iii)
lim sup_{t→T−} ( ∥̺∥L∞ + ∥ϑ∥L∞ ) < ∞.
(1.9)
All results mentioned above describe fluids in a conservative regime, meaning solutions are close to equilibrium in the long run. However, many real-world applications concern fluids out of equilibrium, driven by possibly large driving forces f and/or inhomogeneous boundary conditions. The iconic examples are the Rayleigh–Bénard and Taylor–Couette flows, where the fluid is driven to a turbulent regime by a large temperature gradient and a large boundary velocity, respectively; see Davidson [3].
Motivated by these physically relevant examples, we consider a fluid confined to a bounded domain Ω ⊂ R3 with impermeable boundary, where the temperature and the (tangential) velocity are given on ∂Ω,
ϑ|∂Ω = ϑB, ϑB = ϑB(x), ϑB > 0 on ∂Ω, (1.10)
u|∂Ω = uB, uB = uB(x), uB · n = 0 on ∂Ω. (1.11)
The initial state of the fluid is prescribed:
̺(0, ·) = ̺0, ̺0 > 0 in Ω, ϑ(0, ·) = ϑ0, ϑ0 > 0 in Ω, u(0, ·) = u0. (1.12)
The initial and boundary data are supposed to satisfy suitable compatibility conditions specified below.
The existence of local-in-time strong solutions for the problem (1.1)–(1.6), endowed with the inhomogeneous boundary conditions (1.10), (1.11), was established by Valli [20], [21]; see also Valli and Zajaczkowski [22]. The solution exists on a maximal time interval [0, Tmax), Tmax > 0. Our goal is to show that if Tmax < ∞, then necessarily
lim sup_{t→Tmax−} ( ∥̺(t, ·)∥L∞(Ω) + ∥ϑ(t, ·)∥L∞(Ω) + ∥u(t, ·)∥L∞(Ω;R3) ) = ∞. (1.13)
The proof is based on deriving suitable a priori bounds assuming boundedness of all the norms involved in (1.13) as well as of the norm of the initial/boundary data in a suitable function space. Although our approach shares some similarity with Fang, Zi, and Zhang [5], essential modifications must be made to accommodate the inhomogeneous boundary data as well as the driving force f. The importance of conditional regularity results in the numerical analysis of flows with uncertain initial data was discussed recently in [7].
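The dichotomy behind (1.13) — either the solution exists globally, or some sup-norm must escape every bound as t → Tmax− — can be illustrated by a toy scalar ODE, far simpler than the Navier–Stokes–Fourier system itself. For y' = y² with y(0) = 1 the exact solution y(t) = 1/(1 − t) blows up at t = 1, so monitoring the "norm" |y| along a numerical trajectory detects the approach to the maximal existence time. The sketch below is purely illustrative (the function name, step size, and threshold are our own choices, not part of the paper):

```python
# Toy analogue of the blow-up criterion (1.13): integrate y' = y^2,
# whose exact solution y(t) = 1/(1 - t) for y(0) = 1 blows up at t = 1,
# and record the first time the monitored quantity |y| exceeds a threshold.

def first_escape_time(threshold: float, h: float = 1e-4) -> float:
    """Explicit Euler for y' = y^2, y(0) = 1; return the first t with y > threshold."""
    t, y = 0.0, 1.0
    while y <= threshold:
        y += h * y * y  # Euler step for y' = y^2
        t += h
    return t

if __name__ == "__main__":
    # Escape times cluster near the exact blow-up time t = 1 as the
    # threshold grows, mimicking how the sup-norms in (1.13) behave
    # as t approaches Tmax from the left.
    print(f"|y| exceeds 1e3 at t ~ {first_escape_time(1e3):.3f}")
```

Raising the threshold moves the escape time closer to t = 1 but never past it, which is exactly the qualitative picture of a finite maximal existence time.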
+3 + +The paper is organized as follows. In Section 2, we introduce the class of strong solutions to the +Navier–Stokes–Fourier system and state our main result concerning conditional regularity. The +remaining part of the paper is devoted to the proof of the main result – deriving suitable a priori +bounds. In Section 3 we recall the standard energy estimates that hold even in the class of weak +solutions. Section 4 is the heart of the paper. We establish the necessary estimates on the velocity +gradient by means of the celebrated Gagliardo–Nirenberg interpolation inequality. In Section 5, +higher order estimates on the velocity gradient are derived, and, finally, the estimates are closed +by proving bounds on the temperature time derivative in Section 6. This last part borrows the +main ideas from [9]. +2 +Strong solutions, main result +We start the analysis by recalling the concept of strong solution introduced by Valli [21]. Similarly +to the boundary data uB, ϑB we suppose that the driving force f = f(x) is independent of time, +meaning we deal with an autonomous problem. Following [21], we suppose that Ω ⊂ R3 is a +bounded domain with ∂Ω of class C4. +We assume the data belong to the following class: +̺0 ∈ W 3,2(Ω), 0 < ̺0 ≤ min +x∈Ω ̺0(x), +ϑ0 ∈ W 3,2(Ω), 0 < ϑ0 ≤ min +x∈Ω ϑ0(x), +u0 ∈ W 3,2(Ω; R3), +ϑB ∈ W +7 +2(∂Ω), 0 < ϑB ≤ min +x∈∂Ω ϑB(x), +uB ∈ W +7 +2(∂Ω; R3), uB · n = 0, +f ∈ W 2,2(Ω; R3). +(2.1) +In addition, the data must satisfy the compatibility conditions +ϑ0 = ϑB, u0 = uB on ∂Ω, +̺0u0 · ∇xu0 + ∇xp(̺0, ϑ0) = divxS(Dxu0) + ̺0f on ∂Ω, +̺0u0 · ∇xϑ0 + divxq(ϑ0) = S(Dxu0) : Dxu0 − p(̺0, ϑ0)divxu0 on ∂Ω. +(2.2) +We set +D0 = max +� +∥(̺0, ϑ0, u0)∥W 3,2(Ω;R5), 1 +̺0 +, +1 +ϑ0 +, 1 +ϑB +, ∥ϑB∥W +7 +2 (∂Ω), ∥uB∥W +7 +2 (∂Ω;R3), ∥f∥W 2,2(Ω;R3) +� +. +(2.3) +4 + +2.1 +Local existence +The following result was proved by Valli [21, Theorem A] (see also [20]). +Theorem 2.1. (Local existence of strong solutions) Let Ω ⊂ R3 be a bounded domain of +class C4. 
Suppose that the data (̺0, ϑ0, u0), (ϑB, uB) and f belong to the class (2.1) and satisfy +the compatibility conditions (2.2). +Then there exists a maximal time Tmax > 0 such that the Navier–Stokes–Fourier system (1.1)– +(1.6), with the boundary conditions (1.10), (1.11), and the initial conditions (1.12) admits a solu- +tion (̺, ϑ, u) in [0, Tmax) × Ω unique in the class +̺, ϑ ∈ C([0, T]; W 3,2(Ω)), u ∈ C([0, T]; W 3,2(Ω; R3)), +ϑ ∈ L2(0, T; W 4,2(Ω)), u ∈ L2(0, T; W 4,2(Ω; R3)) +(2.4) +for any 0 < T < Tmax. The existence time Tmax is bounded below by a quantity c(D0) depending +solely on the norms of the data specified in (2.3). In particular, +lim +τ→Tmax− ∥(̺, ϑ, u)(τ, ·)∥W 3,2(Ω;R5) = ∞. +(2.5) +2.2 +Blow up criterion, conditional regularity +Our goal is to show the following result. +Theorem 2.2. (Blow up criterion) Under the hypotheses of Theorem 2.1, suppose that the +maximal existence time Tmax < ∞ is finite. +Then +lim sup +τ→Tmax− +∥(̺, ϑ, u)(τ, ·)∥L∞(Ω;R5) = ∞. +(2.6) +Theorem 2.2 is in the spirit of the blow up criteria for general parabolic systems – the solution +remains regular as long as it is bounded. Of course, our problem in question is of mixed hyperbolic– +parabolic type. +The proof of Theorem 2.2 follows from suitable a priori bounds applied on a compact time +interval. +Proposition 2.3. (Conditional regularity) +Under the hypotheses of Theorem 2.1, let (̺, ϑ, u) be the strong solution of the Navier–Stokes– +Fourier system belonging to the class (2.4) and satisfying +sup +(τ,x)∈[0,T)×Ω +̺(τ, x) ≤ ̺, +sup +(τ,x)∈[0,T)×Ω +ϑ(τ, x) ≤ ϑ, +sup +(τ,x)∈[0,T)×Ω +|u(τ, x)| ≤ u +(2.7) +5 + +for some T < Tmax. +Then there is a quantity c(T, D0, ̺, ϑ, u), bounded for bounded arguments, such that +sup +τ∈[0,T) +max +� +∥(̺, ϑ, u)(τ, ·)∥W 3,2(Ω;R5); sup +x∈Ω +1 +̺(τ, x); sup +x∈Ω +1 +ϑ(τ, x) +� +≤ c(T, D0, ̺, ϑ, u). +(2.8) +In view of Theorem 2.1, the conclusion of Theorem 2.2 follows from Proposition 2.3. 
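The quantitative bound (2.8) of Proposition 2.3 is eventually closed by a Gronwall-type argument (cf. the bounds (4.34)–(4.38) below). As a minimal sanity check of that mechanism — a generic discrete Gronwall inequality, not the paper's actual estimates; all names and constants below are illustrative — one can verify numerically that a sequence obeying y_{n+1} ≤ (1 + hb)y_n + ha stays below the continuous envelope (y_0 + a/b)e^{bt} − a/b, i.e. the solution of y' = a + by:

```python
import math

# Discrete Gronwall check: the extremal sequence y_{n+1} = (1 + h*b)*y_n + h*a
# stays below the continuous Gronwall envelope (y0 + a/b)*exp(b*t) - a/b,
# since 1 + h*b <= exp(h*b).

def gronwall_envelope(y0: float, a: float, b: float, t: float) -> float:
    """Solution of y' = a + b*y, y(0) = y0: the Gronwall upper bound."""
    return (y0 + a / b) * math.exp(b * t) - a / b

def evolve(y0: float, a: float, b: float, h: float, n_steps: int) -> list[float]:
    """Generate the extremal sequence saturating the discrete inequality."""
    ys = [y0]
    for _ in range(n_steps):
        ys.append((1.0 + h * b) * ys[-1] + h * a)
    return ys

if __name__ == "__main__":
    h, n = 1e-3, 2000
    ys = evolve(1.0, a=0.5, b=2.0, h=h, n_steps=n)
    ok = all(y <= gronwall_envelope(1.0, 0.5, 2.0, k * h) + 1e-9
             for k, y in enumerate(ys))
    print("discrete sequence stays below the Gronwall envelope:", ok)
```

The same comparison-with-an-exponential-envelope principle is what turns the differential inequalities of Sections 4–6 into the uniform bounds of (2.8).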
The rest of the paper is therefore devoted to the proof of Proposition 2.3.
Remark 2.4. As observed in [8], the conditional regularity results established in Proposition 2.3 give rise to stability with respect to the data. More specifically, the maximal existence time Tmax is a lower semicontinuous function of the data with respect to the topologies in (2.1).
Remark 2.5. Conditional regularity results, in combination with the weak–strong uniqueness principle in the class of measure-valued solutions, are an efficient tool for proving convergence of numerical schemes; see [6, Chapter 11]. The concept of measure-valued solutions to the Navier–Stokes–Fourier system with inhomogeneous Dirichlet boundary conditions has been introduced recently by Chaudhuri [1].

3 Energy estimates

To begin, it is suitable to extend the boundary data into Ω. For definiteness, we consider the (unique) solutions of the Dirichlet problems
∆x ˜ϑ = 0 in Ω, ˜ϑ|∂Ω = ϑB,
divxS(Dx˜u) = 0 in Ω, ˜u|∂Ω = uB. (3.1)
By abuse of notation, we use the same symbols ϑB, uB for both the boundary values and their C1 extensions ˜ϑ = ˜ϑ(x), ˜u = ˜u(x) inside Ω.
We start with the ballistic energy equality, see [2, Section 2.4],
d/dt ∫Ω ( (1/2)̺|u − uB|^2 + ̺e − ϑB̺s ) dx + ∫Ω (ϑB/ϑ) ( S(Dxu) : Dxu + κ|∇xϑ|^2/ϑ ) dx
= − ∫Ω ( ̺u ⊗ u + pI − S(Dxu) ) : DxuB dx + (1/2) ∫Ω ̺u · ∇x|uB|^2 dx
+ ∫Ω ̺(u − uB) · f dx − ∫Ω ̺su · ∇xϑB dx + κ ∫Ω (∇xϑ/ϑ) · ∇xϑB dx, (3.2)
where we have introduced the entropy
s = cv log(ϑ) − log(̺).
Thus the choice (3.1) yields the following bounds:
sup_{t∈[0,T)} ∫Ω ̺|log(ϑ)|(t, ·) dx ≤ c(T, D0, ̺, ϑ, u), (3.3)
∫_0^T ∫Ω |∇xu|^2 dx dt ≤ C(̺, ϑ, u; data) ⇒ ∫_0^T ∥u∥^2_{W 1,2(Ω;R3)} dt ≤ c(T, D0, ̺, ϑ, u), (3.4)
∫_0^T ∫Ω ( |∇xϑ|^2 + |∇x log(ϑ)|^2 ) dx dt ≤ c(T, D0, ̺, ϑ, u)
⇒ ∫_0^T ∥ϑ∥^2_{W 1,2(Ω)} dt + ∫_0^T ∥log(ϑ)∥^2_{W 1,2(Ω)} dt ≤ c(T, D0, ̺, ϑ, u).
+(3.5) +4 +Estimates of the velocity gradient +This section is the heart of the paper. In principle, we follow the arguments similar to Fang, Zi, +and Zhang [5, Section 3] but here adapted to the inhomogeneous boundary conditions. +4.1 +Estimates of the velocity material derivative +Let us introduce the material derivative of a function g, +Dtg = ∂tg + u · ∇xg. +Accordingly, we may rewrite the momentum equation (1.2) as +̺Dtu + ∇xp = divxS + ̺f. +(4.1) +Now, consider the scalar product of the momentum equation (4.1) with Dt(u − uB), +̺|Dtu|2 + ∇xp · Dt(u − uB) = divxS(Dxu) · Dt(u − uB) + ̺f · Dt(u − uB) + ̺Dtu · DtuB. (4.2) +The next step is integrating (4.2) over Ω. Here and hereafter we use the hypothesis uB·n|∂Ω = 0 +yielding +Dt(u − uB)|∂Ω = (∂tu − u · ∇x(u − uB)) |∂Ω = −uB · ∇x(u − uB)|∂Ω = 0. +(4.3) +Writing +divxS(Dxu) = µ∆xu + +� +η + µ +3 +� +∇xdivxu, +and making use of (4.3) we obtain +� +Ω +divxS(Dxu) · Dt(u − uB) dx +7 + += − +� +Ω +S(Dxu) : ∇x∂tu dx +− µ +� +Ω +∇xu : ∇x +� +u · ∇x(u − uB) +� +dx − +� +η + µ +3 +� � +Ω +divxu divx +� +u · ∇x(u − uB) +� +dx += − 1 +2 +d +dt +� +Ω +S(Dxu) : Dxu dx +− µ +� +Ω +∇xu : ∇x +� +u · ∇x(u − uB) +� +dx − +� +η + µ +3 +� � +Ω +divxu divx +� +u · ∇x(u − uB) +� +dx, +(4.4) +where, furthermore, +� +Ω +∇xu : ∇x(u · ∇xu) dx = +� +Ω +∇xu : (∇xu · ∇xu) dx + 1 +2 +� +Ω +u · ∇x|∇xu|2 dx += +� +Ω +∇xu : (∇xu · ∇xu) dx − 1 +2 +� +Ω +divxu|∇xu|2 dx +(4.5) +Note carefully we have used u · n|∂Ω = 0 in the last integration. Similarly, +� +Ω +divxu divx(u · ∇xu) dx = +� +Ω +divxu ∇xu : ∇t +xu dx − 1 +2 +� +Ω +(divxu)3 dx. +(4.6) +Thus summing up the previous observations, we get +1 +2 +d +dt +� +Ω +S(Dxu) : Dxu dx + 1 +2 +� +Ω +̺|Dtu|2 dx + +� +Ω +∇xp · Dt(u − uB) dx +≤ c(T, D0, ̺, ϑ, u) +� +1 + +� +Ω +|∇xu|3 dx +� +. 
+(4.7) +Moreover, +� +Ω +∇xp · Dt(u − uB) dx = − +� +Ω +p divx(Dt(u − uB)) dx += − +� +Ω +p divxDtu dx + +� +Ω +p divx(u · ∇xuB) dx, +(4.8) +where +p divxDtu = ∂t(p divxu) − +� +∂tp + divx(pu) +� +divxu + divx(pu)divxu + p divx(u · ∇xu) += ∂t(p divxu) − +� +∂tp + divx(pu) +� +divxu + p∇xu : ∇t +xu + divx +� +pu divxu +� +. +As u · n|∂Ω = 0, we have +� +Ω +divx +� +pu divxu +� +dx = 0, +8 + +and the above estimates together with (4.7) give rise to +1 +2 +d +dt +� +Ω +S(Dxu) : Dxu dx − d +dt +� +Ω +pdivxu dx + 1 +2 +� +Ω +̺|Dtu|2 dx +≤ c(T, D0, ̺, ϑ, u) +� +1 + +� +Ω +|∇xu|3 dx +� +− +� +Ω +� +∂tp + divx(pu) +� +divxu dx. +Finally, we realize +∂tp + divx(pu) = ̺Dtϑ +to conclude +1 +2 +d +dt +� +Ω +S(Dxu) : Dxu dx − d +dt +� +Ω +pdivxu dx + 1 +2 +� +Ω +̺|Dtu|2 dx +≤ c(T, D0, ̺, ϑ, u) +� +1 + +� +Ω +̺|Dtϑ||∇xu| dx + +� +Ω +|∇xu|3 dx +� +. +(4.9) +4.2 +Higher order velocity material derivative estimates +Following [5, Section 3, Lemma 3.3], see also Hoff [12], we deduce +̺D2 +t u + ∇x∂tp + divx(∇xp ⊗ u) += µ +� +∆x∂tu + divx(∆xu ⊗ u) +� ++ +� +η + µ +3 +� � +∇xdivx∂tu + divx ((∇xdivxu) ⊗ u) +� ++ ̺u · ∇xf. +(4.10) +Next, we compute +DtuB = u · ∇xuB, +D2 +t uB = ∂tu · ∇xuB + u · ∇x(u · ∇xuB) += Dtu · ∇xuB − (u · ∇xu) · ∇xuB + u · ∇x(u · ∇xuB) += Dtu · ∇xuB + (u ⊗ u) : ∇2 +xuB. +(4.11) +Consequently, we may rewrite (4.10) in the form +̺D2 +t (u − uB) + ∇x∂tp + divx(∇xp ⊗ u) += µ +� +∆x∂tu + divx(∆xu ⊗ u) +� ++ +� +η + µ +3 +� � +∇xdivx∂tu + divx ((∇xdivxu) ⊗ u) +� ++ ̺u · ∇xf +− ̺Dtu · ∇xuB − ̺(u ⊗ u) : ∇2 +xuB. +(4.12) +The next step is considering the scalar product of (4.12) with Dt(u − uB) and integrating over +Ω. The resulting integrals can be handled as follows: +̺D2 +t (u − uB) · Dt(u − uB) = ̺1 +2Dt|Dt(u − uB)|2 +9 + += 1 +2̺ +� +∂t|Dt(u − uB)|2 + u · ∇x|Dt(u − uB)|2� += 1 +2∂t +� +̺|Dt(u − uB)|2� ++ 1 +2divx +� +̺u|Dt(u − uB)|2� +, +where we have used the equation of continuity (1.1). 
Seeing that u · n|∂Ω = 0 we get +� +Ω +̺D2 +t (u − uB) · Dt(u − uB) dx = d +dt +1 +2 +� +Ω +̺|Dt(u − uB)|2 dx. +(4.13) +Similarly, +� +Ω +� +∇x∂tp + divx(∇xp ⊗ u) +� +· Dt(u − uB) dx += − +� +Ω +� +∂tp + divx(pu) +� +divxDt(u − uB) dx ++ +� +Ω +� +divx(pu)divxDt(u − uB) − ∇xp ⊗ u : ∇xDt(u − uB) +� +dx, +(4.14) +where +� +Ω +∇xp ⊗ u : ∇xDt(u − uB) dx += − +� +Ω +p∇xu : ∇xDt(u − uB) dx + +� +Ω +∇x(pu) : ∇xDt(u − uB) dx. +In addition, as Dt(u−uB) vanishes on ∂Ω, we can perform by parts integration in the last integral +obtaining +� +Ω +∇x(pu) : ∇xDt(u − uB) dx = +� +Ω +divx(pu)divxDt(u − uB) dx. +Thus, similarly to the preceding section, we conclude +� +Ω +� +∇x∂tp + divx(∇xp ⊗ u) +� +· Dt(u − uB) dx += − +� +Ω +̺DtϑdivxDt(u − uB) dx + +� +Ω +p∇xu : ∇xDt(u − uB) dx. +(4.15) +Analogously, +� +Ω +� +∆x∂tu + divx(∆xu ⊗ u) +� +· Dt(u − uB) dx += − +� +Ω +∇x∂tu : ∇xDt(u − uB) dx − +� +Ω +(∆xu ⊗ u) : ∇xDt(u − uB) dx += − +� +Ω +∇xDtu : ∇xDt(u − uB) dx − +� +Ω +� +∆xu ⊗ u − ∇x(u · ∇xu) +� +: ∇xDt(u − uB) dx, (4.16) +10 + +where, using summation convention, +� +Ω +� +∆xu ⊗ u +� +: ∇xDt(u − uB) dx += +� +Ω +∂xk +� +uj∂xkui +� +∂xjDt(u − uB)i dx − +� +Ω +∂xkui∂xkuj∂xjDt(u − uB)i dx += +� +Ω +∂xj +� +uj∂xkui +� +∂xkDt(u − uB)i dx − +� +Ω +∂xkui∂xkuj∂xjDt(u − uB)i dx += +� +Ω +divxu ∇xu : ∇xDt(u − uB) dx ++ +� +Ω +� +uj∂xk∂xjui +� +∂xkDt(u − uB)i dx − +� +Ω +∂xkui∂xkuj∂xjDt(u − uB)i dx += +� +Ω +∇x(u · ∇xu) : ∇xDt(u − uB) dx + +� +Ω +divxu ∇xu : ∇xDt(u − uB) dx +− +� +Ω +∂xjui∂xkuj∂xkDt(u − uB)i dx − +� +Ω +∂xkui∂xkuj∂xjDt(u − uB)i dx. +(4.17) +Summing up (4.16), (4.17) we conclude +� +Ω +� +∆x∂tu + divx(∆xu ⊗ u) +� +· Dt(u − uB) dx += − +� +Ω +∇xDtu : ∇xDt(u − uB) dx − +� +Ω +divxu ∇xu : ∇xDt(u − uB) dx ++ +� +Ω +∂xjui∂xkuj∂xkDt(u − uB)i dx + +� +Ω +∂xkui∂xkuj∂xjDt(u − uB)i dx. 
+(4.18) +Estimating the remaining integrals in (4.12) in a similar manner we may infer +1 +2 +d +dt +� +Ω +̺|Dt(u − uB)|2 dx + µ +� +Ω +|∇xDt(u − uB)|2 dx + +� +η + µ +3 +� � +Ω +|divxDt(u − uB)|2 dx +≤ c(T, D0, ̺, ϑ, u) +� +1 + +� +Ω +̺|Dtϑ|2 dx + +� +Ω +|∇xu|4 dx + +� +Ω +̺|Dtu|2 dx +� +. +(4.19) +cf. [5, Section 3, Lemma 3.3]. +4.3 +Velocity decomposition +Following the original idea of Sun, Wang, and Zhang [18], we decompose the velocity field in the +form: +u = v + w, +(4.20) +divxS(Dxv) = ∇xp in (0, T) × Ω, v|∂Ω = 0, +(4.21) +11 + +divxS(Dxw) = ̺Dtu − ̺f in (0, T) × Ω, w|∂Ω = uB. +(4.22) +Since +divxS(Dx∂tv) = ∇x∂tp in (0, T) × Ω, v|∂Ω = 0, +we get +� +Ω +∂tp divxv dx = − +� +Ω +∇x∂tp · v dx = 1 +2 +d +dt +� +Ω +S(Dxv) : Dxv dx. +(4.23) +Moreover, the standard elliptic estimates for the Lam´e operator yield: +∥v∥W 1,q(Ω;R3) ≤ c(q, ̺, ϑ) for all 1 ≤ q < ∞, +(4.24) +∥v∥W 2,q(Ω;R3) ≤ c(q, ̺, ϑ) +� +∥∇x̺∥Lq(Ω;R3) + ∥∇xϑ∥Lq(Ω;R3) +� +, 1 < q < ∞. +(4.25) +Similarly, +∥w∥W 2,2(Ω;R3) ≤ c(T, D0, ̺, ϑ, u) +� +1 + ∥√̺∂tu∥L2(Ω;R3) + ∥∇xu∥L2(Ω;R3×3) +� +. +(4.26) +The estimates (4.24)–(4.26) are uniform in the time interval [0, T). +4.4 +Temperature estimates +Similarly to Fang, Zi, Zhang [5, Section 3, Lemma 3.4] we multiply the internal energy equation +(1.3) on ∂tϑ and integrate over Ω obtaining +cv +� +Ω +̺|Dtϑ|2 dx + κ +2 +d +dt +� +Ω +|∇xϑ|2 dx += cv +� +Ω +̺Dtϑ u · ∇xϑ dx − +� +Ω +̺ϑ divxu Dtϑ dx + +� +Ω +̺ϑ divxu u · ∇xϑ dx ++ d +dt +� +Ω +ϑ S(Dxu) : ∇xu dx +− µ +� +Ω +ϑ +� +∇xu + ∇t +xu − 2 +3divxuI +� +: +� +∇x∂tu + ∇t +x∂tu − 2 +3divx∂tuI +� +dx +− 2η +� +Ω +ϑ divxu divx∂tu dx. +(4.27) +Indeed the term involving the boundary integral is handled as +−κ +� +Ω +∆xϑ ∂tϑ dx = −κ +� +∂Ω +∂tϑB∇xϑ · n dSx + κ +2 +d +dt +� +Ω +|∇xϑ|2 dx, +where +� +∂Ω +∂tϑB∇xϑ · n dSx = 0 +12 + +as the boundary temperature is independent of t. 
+Similarly to Fang, Zi, Zhang [5, Section 3, Lemma 3.4], we have to show that the intergrals +� +Ω +ϑ ∇xu : ∇x∂tu dx, +� +Ω +ϑ ∇xu : ∇t +x∂tu dx, and +� +Ω +ϑ divxu divx∂tu dx +can be rewritten in the form compatible with (4.19), meaning with the time derivatives replaced +by material derivatives. Fortunately, this step can be carried out in the present setting using only +the boundary condition u · n|∂Ω = 0. Indeed we get +� +Ω +ϑ ∇xu : ∇x∂tu dx = +� +Ω +ϑ ∇xu : ∇x(Dtu) dx − +� +Ω +ϑ ∇xu : ∇x(u · ∇xu) dx, +where +� +Ω +ϑ ∇xu : ∇x(u · ∇xu) dx += +� +Ω +ϑ ∇xu : (∇xu · ∇xu) dx + 1 +2 +� +Ω +ϑ u · ∇x|∇xu|2 dx += +� +Ω +ϑ ∇xu : (∇xu · ∇xu) dx − 1 +2 +� +Ω +|∇xu|2 ∇xϑ · u dx − 1 +2 +� +Ω +|∇xu|2 ϑdivxu dx. +Similarly, +� +Ω +ϑ ∇xu : ∇t +x∂tu dx = +� +Ω +ϑ ∇xu : ∇t +x(Dtu) dx − +� +Ω +ϑ ∇xu : ∇t +x(u · ∇xu) dx, +where +� +Ω +ϑ ∇xu : ∇t +x(u · ∇xu) dx += +� +Ω +ϑ ∇xu : (∇t +xu · ∇t +xu) dx + 1 +2 +� +Ω +ϑ u · ∇x(∇xu : ∇t +xu) dx += +� +Ω +ϑ ∇xu : (∇t +xu · ∇t +xu) dx − 1 +2 +� +Ω +(∇xu : ∇t +xu) ∇xϑ · u dx − 1 +2 +� +Ω +(∇xu : ∇t +xu) ϑdivxu dx. +Finally, +� +Ω +ϑ divxu divx∂tu dx = +� +Ω +ϑ divxu divxDtu dx − +� +Ω +ϑ divxu divx(u · ∇xu) dx, +where +� +Ω +ϑ divxu divx(u · ∇xu) dx +13 + += +� +Ω +ϑ divxu (∇xu : ∇t +xu) dx + 1 +2 +� +Ω +ϑu · ∇x|divxu|2 dx += +� +Ω +ϑ divxu (∇xu : ∇t +xu) dx − 1 +2 +� +Ω +|divxu|2 ∇xϑ · u dx − 1 +2 +� +Ω +|divxu|2 ϑdivxu dx. +We conclude, using (4.7), (4.19), and (4.27), +� +Ω +|∇xϑ|2(τ, ·) dx + +� τ +0 +� +Ω +̺|Dtϑ|2 dx dt +≤ c(T, D0, ̺, ϑ, u) +� +1 + +� τ +0 +� +Ω +|∇xu|4 dx dt +� +. +(4.28) +Next, by virtue of the decomposition u = v + w and the bound (4.24), +� +Ω +|∇xu|4 dx +<∼ +� +Ω +|∇xv|4 dx + +� +Ω +|∇xw|4 dx ≤ c(T, D0, ̺, ϑ, u) +� +1 + +� +Ω +|∇xw|4 dx +� +, +(4.29) +and, similarly, +∥w∥L∞(Ω;R3) ≤ ∥u∥L∞(Ω;R3) + ∥v∥L∞(Ω;R3) ≤ c(T, D0, ̺, ϑ, u). 
+(4.30) +Recalling the Gagliardo–Nirenberg interpolation inequality in the form +∥∇xU∥2 +L4(Ω;R3) ≤ ∥U∥L∞(Ω)∥∆xU∥L2(Ω) whenever U|∂Ω = 0, +(4.31) +we may use (4.29), (4.30) to rewrite (4.28) in the form +� +Ω +|∇xϑ|2(τ, ·) dx + +� τ +0 +� +Ω +̺|Dtϑ|2 dx dt +≤ c(T, D0, ̺, ϑ, u) +� +1 + +� τ +0 +� +Ω +|∇xϑ|2 dx dt + +� τ +0 +∥w∥2 +W 2,2(Ω;R3) dt +� +. +(4.32) +Finally, we use the elliptic estimates (4.26) to conclude +� +Ω +|∇xϑ|2(τ, ·) dx + +� τ +0 +� +Ω +̺|Dtϑ|2 dx dt +≤ c(T, D0, ̺, ϑ, u) +� +1 + +� τ +0 +� +Ω +� +|∇xϑ|2 + |∇xu|2� +dx dt + +� τ +0 +∥√̺∂tu∥2 +L2(Ω;R3) dt +� +. +(4.33) +Summing up (4.7), (4.19), and (4.33) we may apply Gronwall’s lemma to obtain the following +bounds: +sup +t∈[0,T) +∥u(t, ·)∥W 1,2(Ω;R3) ≤ c(T, D0, ̺, ϑ, u), +(4.34) +sup +t∈[0,T) +∥√̺Dtu(t, ·)∥L2(Ω;R3) ≤ c(T, D0, ̺, ϑ, u), +(4.35) +14 + +sup +t∈[0,T) +∥ϑ(t, ·)∥W 1,2(Ω) ≤ c(T, D0, ̺, ϑ, u), +(4.36) +� T +0 +� +Ω +|∇xDtu|2 dx dt ≤ c(T, D0, ̺, ϑ, u), +(4.37) +� T +0 +� +Ω +̺|Dtϑ|2 dx dt ≤ c(T, D0, ̺, ϑ, u). +(4.38) +Moreover, it follows from (4.24), (4.31), (4.35) +sup +t∈[0,T) +∥∇xu(t, ·)∥L4(Ω;R3×3) ≤ c(T, D0, ̺, ϑ, u). +(4.39) +In addition, (4.38), (4.39) and the standard parabolic estimates applied to the internal energy +balance (1.3) yield +� T +0 +∥ϑ∥2 +W 2,2(Ω) dt ≤ c(T, D0, ̺, ϑ, u). +(4.40) +5 +Second energy bound +It follows from (4.26), (4.35) that +sup +t∈[0,T) +∥w(t, ·)∥W 2,2(Ω;R3) ≤ c(T, D0, ̺, ϑ, u); +(5.1) +whence, by virtue of (4.24) and Sobolev embedding W 1,2(Ω) ֒→ L6(Ω), +sup +t∈[0,T) +∥∇xu(t, ·)∥2 +L6(Ω;R3×3) ≤ c(T, D0, ̺, ϑ, u). +(5.2) +Moreover, as a consequence of (4.37), Dtu is bounded in L2(L6), which, combined with (5.2), gives +rise to +� T +0 +∥∂tu∥2 +L6(Ω;R3) dt ≤ c(T, D0, ̺, ϑ, u). +(5.3) +Finally, going back to (4.22) we conclude +� T +0 +∥w∥2 +W 2,6(Ω;R3) dt ≤ c(T, D0, ̺, ϑ, u), +(5.4) +and +� T +0 +∥u∥2 +W 1,q(Ω;R3) dt ≤ c(T, D0, ̺, ϑ, u, q) for any 1 ≤ q < ∞. 
(5.5)

6 Estimates of the derivatives of the density

Using (5.4), (5.5), we may proceed as in [19, Section 5] to deduce the bounds
sup_{t∈[0,T)} ( ∥∂t̺(t, ·)∥L6(Ω) + ∥̺(t, ·)∥W 1,6(Ω) ) ≤ c(T, D0, ̺, ϑ, u). (6.1)
Revisiting the momentum equation (1.2), we use (6.1) together with the other bounds established above to obtain
∫_0^T ∥u∥^2_{W 2,6(Ω;R3)} dt ≤ c(T, D0, ̺, ϑ, u). (6.2)

6.1 Positivity of the density and temperature

It follows from (6.2) that divxu is bounded in L1(0, T; L∞(Ω)). Thus the equation of continuity (1.1) yields a positive lower bound on the density,
inf_{(t,x)∈[0,T)×Ω} ̺(t, x) ≥ ̺ > 0, (6.3)
where the lower bound depends on the data as well as on the length T of the time interval. Similarly, rewriting the internal energy balance equation (1.3) in the form
cv (∂tϑ + u · ∇xϑ) − (κ/̺)∆xϑ = (1/̺)S : Dxu − ϑ divxu (6.4)
we may apply the standard parabolic maximum/minimum principle to deduce
inf_{(t,x)∈[0,T)×Ω} ϑ(t, x) ≥ ϑ > 0. (6.5)

7 Parabolic regularity for the heat equation

We rewrite the parabolic equation (6.4) in terms of Θ = ϑ − ϑB. Recalling ∆xϑB = 0, we get
cv (∂tΘ + u · ∇xϑ) − (κ/̺)∆xΘ = (1/̺)S : Dxu − ϑ divxu (7.1)
with the homogeneous Dirichlet boundary conditions
Θ|∂Ω = 0. (7.2)
Now, we can apply all the arguments of [10, Sections 4.6, 4.7] to Θ, obtaining the bounds
∥ϑ∥Cα([0,T]×Ω) ≤ c(T, D0, ̺, ϑ, u) for some α > 0, (7.3)
∥ϑ∥Lp(0,T;W 2,3(Ω)) + ∥∂tϑ∥Lp(0,T;L3(Ω)) ≤ c(T, D0, ̺, ϑ, u) for all 1 ≤ p < ∞, (7.4)
together with
∥u∥Lp(0,T;W 2,6(Ω;R3)) + ∥∂tu∥Lp(0,T;L6(Ω;R3)) ≤ c(T, D0, ̺, ϑ, u) for any 1 ≤ p < ∞. (7.5)

8 Final estimates

The bounds (7.5) imply, in particular,
sup_{(t,x)∈[0,T)×Ω} |∇xu(t, x)| ≤ c(T, D0, ̺, ϑ, u). (8.1)
Thus the desired higher order estimates can be obtained exactly as in [9, Section 4.6].
Indeed, the arguments of [9, Section 4.6] are based on differentiating equation (7.1) with respect to time, which gives rise to a parabolic problem for \partial_t \vartheta with the homogeneous Dirichlet boundary condition \partial_t \vartheta|_{\partial\Omega} = 0. Specifically, we get

    c_v \partial_{tt}^2 \vartheta + c_v u \cdot \nabla_x \partial_t \vartheta - \frac{\kappa}{\varrho} \Delta_x \partial_t \vartheta
      = - c_v \partial_t u \cdot \nabla_x \vartheta - \frac{1}{\varrho^2} \partial_t \varrho \, \big( \kappa \Delta_x \vartheta + \mathbb{S}(\mathbb{D}_x u) : \mathbb{D}_x u \big)
        + \frac{2}{\varrho} \mathbb{S}(\mathbb{D}_x u) : \mathbb{D}_x \partial_t u - \partial_t \vartheta \, \mathrm{div}_x u - \vartheta \, \mathrm{div}_x \partial_t u.

The estimates obtained in the previous sections imply that the right-hand side of the above equation is bounded in L^2(0,T; L^2(\Omega)). Thus, multiplying the equation by \Delta_x \partial_t \vartheta and performing the standard integration by parts, we get the desired estimates as in [9, Section 4.6].

The remaining estimates are obtained exactly as in [9, Section 4.6]:

    \sup_{t \in [0,T)} \|\vartheta(t, \cdot)\|_{W^{3,2}(\Omega)} + \sup_{t \in [0,T)} \|\partial_t \vartheta(t, \cdot)\|_{W^{1,2}(\Omega)} \le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}),    (8.2)

    \int_0^T \big( \|\partial_t \vartheta\|_{W^{2,2}(\Omega)}^2 + \|\vartheta\|_{W^{4,2}(\Omega)}^2 \big) \,\mathrm{d}t \le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}),    (8.3)

    \sup_{t \in [0,T)} \|u(t, \cdot)\|_{W^{3,2}(\Omega;\mathbb{R}^3)} + \sup_{t \in [0,T)} \|\partial_t u(t, \cdot)\|_{W^{1,2}(\Omega;\mathbb{R}^3)} \le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}),    (8.4)

    \int_0^T \big( \|\partial_t u\|_{W^{2,2}(\Omega;\mathbb{R}^3)}^2 + \|u\|_{W^{4,2}(\Omega;\mathbb{R}^3)}^2 \big) \,\mathrm{d}t \le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}),    (8.5)

and

    \sup_{t \in [0,T)} \|\varrho(t, \cdot)\|_{W^{3,2}(\Omega)} \le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}).    (8.6)

We have completed the proof of Proposition 2.3.

References

[1] N. Chaudhuri. On weak (measure-valued)–strong uniqueness for the Navier–Stokes–Fourier system with Dirichlet boundary condition. Archive Preprint Series, 2022. arXiv preprint No. 2207.00991.

[2] N. Chaudhuri and E. Feireisl. Navier–Stokes–Fourier system with Dirichlet boundary conditions. Appl. Anal., 101(12):4076–4094, 2022.

[3] P. A. Davidson. Turbulence: An Introduction for Scientists and Engineers. Oxford University Press, Oxford, 2004.

[4] J. Fan, S. Jiang, and Y. Ou. A blow-up criterion for compressible viscous heat-conductive flows. Ann. Inst. H. Poincaré Anal. Non Linéaire, 27(1):337–350, 2010.

[5] D. Fang, R. Zi, and T. Zhang. A blow-up criterion for two dimensional compressible viscous heat-conductive flows.
Nonlinear Anal., 75(6):3130–3141, 2012.

[6] E. Feireisl, M. Lukáčová-Medviďová, H. Mizerová, and B. She. Numerical Analysis of Compressible Fluid Flows. Springer-Verlag, Cham, 2022.

[7] E. Feireisl and M. Lukáčová-Medviďová. Convergence of a stochastic collocation finite volume method for the compressible Navier–Stokes system. Archive Preprint Series, 2021. arXiv preprint No. 2111.07435.

[8] E. Feireisl and M. Lukáčová-Medviďová. Statistical solutions for the Navier–Stokes–Fourier system. Archive Preprint Series, 2022. arXiv preprint No. 2212.06784.

[9] E. Feireisl, A. Novotný, and Y. Sun. A regularity criterion for the weak solutions to the Navier–Stokes–Fourier system. Arch. Ration. Mech. Anal., 212(1):219–239, 2014.

[10] E. Feireisl and Y. Sun. Conditional regularity of very weak solutions to the Navier–Stokes–Fourier system. In Recent Advances in Partial Differential Equations and Applications, volume 666 of Contemp. Math., pages 179–199. Amer. Math. Soc., Providence, RI, 2016.

[11] E. Feireisl, H. Wen, and C. Zhu. On Nash's conjecture for models of viscous, compressible, and heat conducting fluids. IM ASCR Prague, preprint No. IM 2022 6, 2022.

[12] D. Hoff. Global solutions of the Navier–Stokes equations for multidimensional compressible flow with discontinuous initial data. J. Differential Equations, 120:215–254, 1995.

[13] X. Huang and J. Li. Serrin-type blowup criterion for viscous, compressible, and heat conducting Navier–Stokes and magnetohydrodynamic flows. Comm. Math. Phys., 324(1):147–171, 2013.

[14] X. Huang, J. Li, and Y. Wang. Serrin-type blowup criterion for full compressible Navier–Stokes system. Arch. Ration. Mech. Anal., 207(1):303–316, 2013.

[15] Q. Jiu, Y. Wang, and Y. Ye. Refined blow-up criteria for the full compressible Navier–Stokes equations involving temperature. J. Evol. Equ., 21(2):1895–1916, 2021.

[16] F. Merle, P. Raphaël, I. Rodnianski, and J. Szeftel.
On the implosion of a compressible fluid I: smooth self-similar inviscid profiles. Ann. of Math. (2), 196(2):567–778, 2022.

[17] F. Merle, P. Raphaël, I. Rodnianski, and J. Szeftel. On the implosion of a compressible fluid II: singularity formation. Ann. of Math. (2), 196(2):779–889, 2022.

[18] Y. Sun, C. Wang, and Z. Zhang. A Beale–Kato–Majda criterion for the 3-D compressible Navier–Stokes equations. J. Math. Pures Appl., 95(1):36–47, 2011.

[19] Y. Sun, C. Wang, and Z. Zhang. A Beale–Kato–Majda criterion for three dimensional compressible viscous heat-conductive flows. Arch. Ration. Mech. Anal., 201(2):727–742, 2011.

[20] A. Valli. A correction to the paper: "An existence theorem for compressible viscous fluids" [Ann. Mat. Pura Appl. (4) 130 (1982), 197–213; MR 83h:35112]. Ann. Mat. Pura Appl. (4), 132:399–400 (1983), 1982.

[21] A. Valli. An existence theorem for compressible viscous fluids. Ann. Mat. Pura Appl. (4), 130:197–213, 1982.

[22] A. Valli and M. Zajaczkowski. Navier–Stokes equations for compressible fluids: Global existence and qualitative properties of the solutions in the general case. Commun. Math. Phys., 103:259–296, 1986.

[23] H. Wen and C. Zhu. Blow-up criterions of strong solutions to 3D compressible Navier–Stokes equations with vacuum. Adv. Math., 248:534–572, 2013.

[24] H. Wen and C. Zhu. Global solutions to the three-dimensional full compressible Navier–Stokes equations with vacuum at infinity in some classes of large data. SIAM J. Math. Anal., 49(1):162–221, 2017.
+19 + diff --git a/QNFRT4oBgHgl3EQfJje8/content/tmp_files/load_file.txt b/QNFRT4oBgHgl3EQfJje8/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..b3049ab3e7a7148c5e6db69c7544f2d3b3dbde55 --- /dev/null +++ b/QNFRT4oBgHgl3EQfJje8/content/tmp_files/load_file.txt @@ -0,0 +1,597 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf,len=596 +page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='13496v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='AP] 31 Jan 2023 Conditional regularity for the Navier–Stokes–Fourier system with Dirichlet boundary conditions Danica Basari´c ∗ Eduard Feireisl ∗ Hana Mizerov´a ∗,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='† ∗ Institute of Mathematics of the Czech Academy of Sciences ˇZitn´a 25,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' CZ-115 67 Praha 1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Czech Republic † Department of Mathematical Analysis and Numerical Mathematics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Comenius University Mlynsk´a dolina,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' 842 48 Bratislava,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Slovakia Abstract We consider the 
Navier–Stokes–Fourier system with the inhomogeneous boundary condi- tions for the velocity and the temperature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' We show that solutions emanating from sufficiently regular data remain regular as long as the density ̺, the absolute temperature ϑ, and the modulus of the fluid velocity |u| remain bounded.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Keywords: Navier–Stokes–Fourier system, conditional regularity, blow–up criterion, regular solution 1 Introduction Standard systems of equations in fluid mechanics including the Navier–Stokes–Fourier system governing the motion of a compressible, viscous, and heat conducting fluid are well posed in the class of strong solutions on a possibly short time interval [0, Tmax).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' The recent results of Merle at al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' [16], [17] strongly indicate that Tmax may be finite, at least in the idealized case of “isentropic” viscous flow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Conditional regularity results guarantee that a blow up will not occur as soon as some lower order norms of solutions are controlled.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' We consider the Navier–Stokes–Fourier system governing the time evolution of the mass density ̺ = ̺(t, x), the (absolute) temperature ϑ = ϑ(t, x), and the velocity u = u(t, x) of a compressible, viscous, and heat conducting fluid: ∗The work of D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=', E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=', and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' was supported by the Czech Sciences Foundation (GAˇCR), Grant Agreement 21–02411S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' The Institute of Mathematics of the Czech Academy of Sciences is supported by RVO:67985840.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' 1 ∂t̺ + divx(̺u) = 0, (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='1) ∂t(̺u) + divx(̺u ⊗ u) + ∇xp(̺, ϑ) = divxS(Dxu) + ̺f, Dxu = 1 2 � ∇xu + ∇t xu � , (1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='2) ∂t(̺e(̺, ϑ)) + divx(̺e(̺, ϑ)u) + divxq(∇xϑ) = S(Dxu) : Dxu − p(̺, ϑ)divxu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='3) The fluid is Newtonian, the viscous stress S is given by Newton’s rheological law S(Dxu) = 2µ � Dxu − 1 3divxuI � + ηdivxuI, µ > 0, η ≥ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='4) The heat flux obeys Fourier’s law q(∇xϑ) = −κ∇xϑ, κ > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='5) The equation of state for the pressure p and the internal energy e is given by the standard Boyle– Mariotte law of perfect gas, p(̺, ϑ) = ̺ϑ, e(̺, ϑ) = cvϑ, cv > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='6) For the sake of simplicity, we suppose that the viscosity coefficients µ, η, the heat conductivity coefficient κ as well as the specific heat at constant volume cv are constant.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' There is a large number of recent results concerning conditional regularity for the Navier– Stokes–Fourier system in terms of various norms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Fan, Jiang, and Ou [4] consider a bounded fluid domain Ω ⊂ R3 with the conservative boundary conditions u|∂Ω = 0, ∇xϑ · n|∂Ω = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='7) The same problem is studied by Sun, Wang, and Zhang [19] and later by Huang, Li, Wang [14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' There are results for the Cauchy problem Ω = R3 by Huang and Li [13], and Jiu, Wang and Ye [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Possibly the best result so far has been established in [11], where the blow up criterion for both the Cauchy problem and the boundary value problem (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='7) is formulated in terms of the maximum of the density and a Serrin type regularity for the temperature: lim sup t→Tmax− � ∥̺(t, ·)∥L∞ + ∥ϑ − ϑ∞∥Ls(0,t)(Lr) � = ∞, 3 2 < r ≤ ∞, 1 ≤ s ≤ ∞, 2 s + 3 r ≤ 2, where ϑ∞ denotes the far field temperature in the Cauchy problem, cf.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' also the previous results by Wen and Zhu [23], [24].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Much less is known in the case of the Dirichlet boundary conditions u|∂Ω = uB, ϑ|∂Ω = ϑB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='8) 2 Fan, Zhi, and Zhang [5] showed that a strong solution of the Navier–Stokes–Fourier system remains regular up to a time T > 0 if (i) Ω ⊂ R2 is a bounded domain, (ii) uB = 0, ϑB = 0, and (iii) lim sup t→T− (∥̺∥L∞ + ∥ϑ∥L∞) < ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='9) All results mentioned above describe fluids in a conservative regime, meaning solutions are close to equilibrium in the long run.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' However, many real world applications concern fluids out of equilibrium driven by possibly large driving forces f and/or inhomogeneous boundary conditions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' The iconic examples are the Rayleigh–B´enard and Taylor–Couette flows where the fluid is driven to a turbulent regime by a large temperature gradient and large boundary velocity, respectively, see Davidson [3].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Motivated by these physically relevant examples, we consider a fluid confined to a bounded domain Ω ⊂ R3 with impermeable boundary, where the temperature and the (tangential) velocity are given on ∂Ω, ϑ|∂Ω = ϑB, ϑB = ϑB(x), ϑB > 0 on ∂Ω, (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='10) u|∂Ω = uB, uB = uB(x), uB · n = 0 on ∂Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='11) The initial state of the fluid is prescribed: ̺(0, ·) = ̺0, ̺0 > 0 in Ω, ϑ(0, ·) = ϑ0, ϑ0 > 0 in Ω, u(0, ·) = u0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='12) The initial and boundary data are supposed to satisfy suitable compatibility conditions specified below.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' The existence of local in time strong solutions for the problem (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='1)–(1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='6), endowed with the inhomogeneous boundary conditions (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='10), (1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='11) was established by Valli [20], [21] , see also Valli and Zajaczkowski [22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' The solution exists on a maximal time interval [0, Tmax), Tmax > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Our goal is to show that if Tmax < ∞, then necessarily lim sup t→Tmax− � ∥̺(t, ·)∥L∞(Ω) + ∥ϑ(t, ·)∥L∞(Ω) + ∥u(t, ·)∥L∞(Ω;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='R3) � = ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='13) The proof is based on deriving suitable a priori bounds assuming boundedness of all norms involved in (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='13) as well as the norm of the initial/boundary data in a suitable function space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Although approach shares some similarity with Fang, Zi, and Zhang [5], essential modifications must be made to accommodate the inhomogeneous boundary data as well as the driving force f.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' The importance of conditional regularity results in numerical analysis of flows with uncertain initial data was discussed recently in [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' 3 The paper is organized as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' In Section 2, we introduce the class of strong solutions to the Navier–Stokes–Fourier system and state our main result concerning conditional regularity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' The remaining part of the paper is devoted to the proof of the main result – deriving suitable a priori bounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' In Section 3 we recall the standard energy estimates that hold even in the class of weak solutions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Section 4 is the heart of the paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' We establish the necessary estimates on the velocity gradient by means of the celebrated Gagliardo–Nirenberg interpolation inequality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' In Section 5, higher order estimates on the velocity gradient are derived, and, finally, the estimates are closed by proving bounds on the temperature time derivative in Section 6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' This last part borrows the main ideas from [9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' 2 Strong solutions, main result We start the analysis by recalling the concept of strong solution introduced by Valli [21].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Similarly to the boundary data uB, ϑB we suppose that the driving force f = f(x) is independent of time, meaning we deal with an autonomous problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Following [21], we suppose that Ω ⊂ R3 is a bounded domain with ∂Ω of class C4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' We assume the data belong to the following class: ̺0 ∈ W 3,2(Ω), 0 < ̺0 ≤ min x∈Ω ̺0(x), ϑ0 ∈ W 3,2(Ω), 0 < ϑ0 ≤ min x∈Ω ϑ0(x), u0 ∈ W 3,2(Ω;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' R3), ϑB ∈ W 7 2(∂Ω), 0 < ϑB ≤ min x∈∂Ω ϑB(x), uB ∈ W 7 2(∂Ω;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' R3), uB · n = 0, f ∈ W 2,2(Ω;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' R3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='1) In addition, the data must satisfy the compatibility conditions ϑ0 = ϑB, u0 = uB on ∂Ω, ̺0u0 · ∇xu0 + ∇xp(̺0, ϑ0) = divxS(Dxu0) + ̺0f on ∂Ω, ̺0u0 · ∇xϑ0 + divxq(ϑ0) = S(Dxu0) : Dxu0 − p(̺0, ϑ0)divxu0 on ∂Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='2) We set D0 = max � ∥(̺0, ϑ0, u0)∥W 3,2(Ω;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='R5), 1 ̺0 , 1 ϑ0 , 1 ϑB , ∥ϑB∥W 7 2 (∂Ω), ∥uB∥W 7 2 (∂Ω;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='R3), ∥f∥W 2,2(Ω;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='R3) � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='3) 4 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='1 Local existence The following result was proved by Valli [21, Theorem A] (see also [20]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (Local existence of strong solutions) Let Ω ⊂ R3 be a bounded domain of class C4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Suppose that the data (̺0, ϑ0, u0), (ϑB, uB) and f belong to the class (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='1) and satisfy the compatibility conditions (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Then there exists a maximal time Tmax > 0 such that the Navier–Stokes–Fourier system (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='1)– (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='6), with the boundary conditions (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='10), (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='11), and the initial conditions (1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='12) admits a solu- tion (̺, ϑ, u) in [0, Tmax) × Ω unique in the class ̺, ϑ ∈ C([0, T];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' W 3,2(Ω)), u ∈ C([0, T];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' W 3,2(Ω;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' R3)), ϑ ∈ L2(0, T;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' W 4,2(Ω)), u ∈ L2(0, T;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' W 4,2(Ω;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' R3)) (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='4) for any 0 < T < Tmax.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' The existence time Tmax is bounded below by a quantity c(D0) depending solely on the norms of the data specified in (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' In particular, lim τ→Tmax− ∥(̺, ϑ, u)(τ, ·)∥W 3,2(Ω;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='R5) = ∞.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='5) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='2 Blow up criterion, conditional regularity Our goal is to show the following result.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (Blow up criterion) Under the hypotheses of Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='1, suppose that the maximal existence time Tmax < ∞ is finite.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Then lim sup τ→Tmax− ∥(̺, ϑ, u)(τ, ·)∥L∞(Ω;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='R5) = ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='6) Theorem 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='2 is in the spirit of the blow up criteria for general parabolic systems – the solution remains regular as long as it is bounded.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Of course, our problem in question is of mixed hyperbolic– parabolic type.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' The proof of Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='2 follows from suitable a priori bounds applied on a compact time interval.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Proposition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (Conditional regularity) Under the hypotheses of Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='1, let (̺, ϑ, u) be the strong solution of the Navier–Stokes– Fourier system belonging to the class (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='4) and satisfying sup (τ,x)∈[0,T)×Ω ̺(τ, x) ≤ ̺, sup (τ,x)∈[0,T)×Ω ϑ(τ, x) ≤ ϑ, sup (τ,x)∈[0,T)×Ω |u(τ, x)| ≤ u (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content='7) 5 for some T < Tmax.' 
Then there is a quantity $c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u})$, bounded for bounded arguments, such that
$$
\sup_{\tau \in [0,T)} \max \left\{ \| (\varrho, \vartheta, u)(\tau, \cdot) \|_{W^{3,2}(\Omega; \mathbb{R}^5)};\ \sup_{x \in \Omega} \frac{1}{\varrho(\tau, x)};\ \sup_{x \in \Omega} \frac{1}{\vartheta(\tau, x)} \right\}
\le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}). \quad (2.8)
$$

In view of Theorem 2.1, the conclusion of Theorem 2.2 follows from Proposition 2.3. The rest of the paper is therefore devoted to the proof of Proposition 2.3.

Remark 2.4. As observed in [8], the conditional regularity result established in Proposition 2.3 gives rise to stability with respect to the data. More specifically, the maximal existence time $T_{\max}$ is a lower semicontinuous function of the data with respect to the topologies in (2.1).

Remark 2.5. Conditional regularity, in combination with the weak–strong uniqueness principle in the class of measure-valued solutions, is an efficient tool for proving convergence of numerical schemes, see [6, Chapter 11]. The concept of measure-valued solutions to the Navier–Stokes–Fourier system with inhomogeneous Dirichlet boundary conditions has been introduced recently by Chaudhuri [1].
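The reduction of Theorem 2.2 to Proposition 2.3 is a standard continuation argument; a brief sketch, in the notation above, reads:

```latex
% Sketch: Proposition 2.3 implies Theorem 2.2 (by contradiction).
% Suppose T_max < \infty while the solution stays bounded:
\limsup_{\tau \to T_{\max}-} \|(\varrho,\vartheta,u)(\tau,\cdot)\|_{L^\infty(\Omega;\mathbb{R}^5)} < \infty .
% Then (2.7) holds for every T < T_max with fixed barred constants, and (2.8)
% yields uniform W^{3,2} bounds, together with positive lower bounds on
% \varrho and \vartheta, up to T_max. The local existence theory (Theorem 2.1),
% applied at times close to T_max, then extends the strong solution beyond
% T_max, contradicting the maximality of T_max.
```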
3 Energy estimates

To begin, it is suitable to extend the boundary data into $\Omega$. For definiteness, we consider the (unique) solutions of the Dirichlet problems
$$
\Delta_x \tilde{\vartheta} = 0 \ \text{in}\ \Omega, \quad \tilde{\vartheta}|_{\partial\Omega} = \vartheta_B, \qquad
\mathrm{div}_x \mathbb{S}(\mathbb{D}_x \tilde{u}) = 0 \ \text{in}\ \Omega, \quad \tilde{u}|_{\partial\Omega} = u_B. \quad (3.1)
$$
By abuse of notation, we use the same symbols $\vartheta_B$, $u_B$ for both the boundary values and their $C^1$ extensions $\tilde{\vartheta} = \tilde{\vartheta}(x)$, $\tilde{u} = \tilde{u}(x)$ inside $\Omega$.

We start with the ballistic energy equality, see [2, Section 2.4],
$$
\frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega \left[ \frac12 \varrho |u - u_B|^2 + \varrho e - \vartheta_B \varrho s \right] \mathrm{d}x
+ \int_\Omega \frac{\vartheta_B}{\vartheta} \left[ \mathbb{S}(\mathbb{D}_x u) : \mathbb{D}_x u + \frac{\kappa |\nabla_x \vartheta|^2}{\vartheta} \right] \mathrm{d}x
$$
$$
= - \int_\Omega \left[ \varrho u \otimes u + p \mathbb{I} - \mathbb{S}(\mathbb{D}_x u) \right] : \mathbb{D}_x u_B \, \mathrm{d}x
+ \frac12 \int_\Omega \varrho u \cdot \nabla_x |u_B|^2 \, \mathrm{d}x
+ \int_\Omega \varrho (u - u_B) \cdot f \, \mathrm{d}x
- \int_\Omega \varrho s\, u \cdot \nabla_x \vartheta_B \, \mathrm{d}x
+ \kappa \int_\Omega \frac{\nabla_x \vartheta}{\vartheta} \cdot \nabla_x \vartheta_B \, \mathrm{d}x, \quad (3.2)
$$
where we have introduced the entropy $s = c_v \log(\vartheta) - \log(\varrho)$.

Thus the choice (3.1) yields the following bounds:
$$
\sup_{t \in [0,T)} \int_\Omega \varrho |\log(\vartheta)|(t, \cdot) \, \mathrm{d}x \le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}), \quad (3.3)
$$
$$
\int_0^T \int_\Omega |\nabla_x u|^2 \, \mathrm{d}x \, \mathrm{d}t \le C(\overline{\varrho}, \overline{\vartheta}, \overline{u};\ \text{data})
\ \Rightarrow\ \int_0^T \| u \|^2_{W^{1,2}(\Omega; \mathbb{R}^3)} \, \mathrm{d}t \le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}), \quad (3.4)
$$
$$
\int_0^T \int_\Omega \left( |\nabla_x \vartheta|^2 + |\nabla_x \log(\vartheta)|^2 \right) \mathrm{d}x \, \mathrm{d}t \le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u})
\ \Rightarrow\ \int_0^T \| \vartheta \|^2_{W^{1,2}(\Omega)} \, \mathrm{d}t + \int_0^T \| \log(\vartheta) \|^2_{W^{1,2}(\Omega)} \, \mathrm{d}t \le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}). \quad (3.5)
$$

4 Estimates of the velocity gradient

This section is the heart of the paper. In principle, we follow arguments similar to Fang, Zi, and Zhang [5, Section 3], here adapted to the inhomogeneous boundary conditions.

4.1 Estimates of the velocity material derivative

Let us introduce the material derivative of a function $g$,
$$
D_t g = \partial_t g + u \cdot \nabla_x g.
$$
Accordingly, we may rewrite the momentum equation (1.2) as
$$
\varrho D_t u + \nabla_x p = \mathrm{div}_x \mathbb{S} + \varrho f. \quad (4.1)
$$
Now, consider the scalar product of the momentum equation (4.1) with $D_t(u - u_B)$:
$$
\varrho |D_t u|^2 + \nabla_x p \cdot D_t(u - u_B)
= \mathrm{div}_x \mathbb{S}(\mathbb{D}_x u) \cdot D_t(u - u_B) + \varrho f \cdot D_t(u - u_B) + \varrho D_t u \cdot D_t u_B. \quad (4.2)
$$
The next step is integrating (4.2) over $\Omega$. Here and hereafter we use the hypothesis $u_B \cdot n|_{\partial\Omega} = 0$, yielding
$$
D_t(u - u_B)|_{\partial\Omega}
= \left( \partial_t u + u \cdot \nabla_x (u - u_B) \right)|_{\partial\Omega}
= u_B \cdot \nabla_x (u - u_B)|_{\partial\Omega} = 0.
$$
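A small identity used repeatedly below: combining the material derivative with the equation of continuity (1.1), $\partial_t \varrho + \mathrm{div}_x(\varrho u) = 0$, one may check that

```latex
% For any smooth g, the continuity equation puts \varrho D_t g in divergence form:
\varrho D_t g
  = \varrho \partial_t g + \varrho u \cdot \nabla_x g
  = \partial_t(\varrho g) - g \,\partial_t \varrho
    + \mathrm{div}_x(\varrho g u) - g \,\mathrm{div}_x(\varrho u)
  = \partial_t(\varrho g) + \mathrm{div}_x(\varrho g u),
% since \partial_t \varrho + \mathrm{div}_x(\varrho u) = 0. Integrating over \Omega
% and using u \cdot n|_{\partial\Omega} = 0,
%   d/dt \int_\Omega \varrho g \, dx = \int_\Omega \varrho D_t g \, dx .
```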
(4.3)

Writing
$$
\mathrm{div}_x \mathbb{S}(\mathbb{D}_x u) = \mu \Delta_x u + \left( \eta + \frac{\mu}{3} \right) \nabla_x \mathrm{div}_x u,
$$
and making use of (4.3), we obtain
$$
\int_\Omega \mathrm{div}_x \mathbb{S}(\mathbb{D}_x u) \cdot D_t(u - u_B) \, \mathrm{d}x
= - \int_\Omega \mathbb{S}(\mathbb{D}_x u) : \nabla_x \partial_t u \, \mathrm{d}x
- \mu \int_\Omega \nabla_x u : \nabla_x \left( u \cdot \nabla_x (u - u_B) \right) \mathrm{d}x
- \left( \eta + \frac{\mu}{3} \right) \int_\Omega \mathrm{div}_x u \, \mathrm{div}_x \left( u \cdot \nabla_x (u - u_B) \right) \mathrm{d}x
$$
$$
= - \frac12 \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega \mathbb{S}(\mathbb{D}_x u) : \mathbb{D}_x u \, \mathrm{d}x
- \mu \int_\Omega \nabla_x u : \nabla_x \left( u \cdot \nabla_x (u - u_B) \right) \mathrm{d}x
- \left( \eta + \frac{\mu}{3} \right) \int_\Omega \mathrm{div}_x u \, \mathrm{div}_x \left( u \cdot \nabla_x (u - u_B) \right) \mathrm{d}x, \quad (4.4)
$$
where, furthermore,
$$
\int_\Omega \nabla_x u : \nabla_x (u \cdot \nabla_x u) \, \mathrm{d}x
= \int_\Omega \nabla_x u : (\nabla_x u \cdot \nabla_x u) \, \mathrm{d}x + \frac12 \int_\Omega u \cdot \nabla_x |\nabla_x u|^2 \, \mathrm{d}x
= \int_\Omega \nabla_x u : (\nabla_x u \cdot \nabla_x u) \, \mathrm{d}x - \frac12 \int_\Omega \mathrm{div}_x u \, |\nabla_x u|^2 \, \mathrm{d}x. \quad (4.5)
$$
Note carefully we have used $u \cdot n|_{\partial\Omega} = 0$ in the last integration. Similarly,
$$
\int_\Omega \mathrm{div}_x u \, \mathrm{div}_x (u \cdot \nabla_x u) \, \mathrm{d}x
= \int_\Omega \mathrm{div}_x u \, \nabla_x u : \nabla^t_x u \, \mathrm{d}x - \frac12 \int_\Omega (\mathrm{div}_x u)^3 \, \mathrm{d}x. \quad (4.6)
$$
Thus, summing up the previous observations, we get
$$
\frac12 \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega \mathbb{S}(\mathbb{D}_x u) : \mathbb{D}_x u \, \mathrm{d}x
+ \frac12 \int_\Omega \varrho |D_t u|^2 \, \mathrm{d}x
+ \int_\Omega \nabla_x p \cdot D_t(u - u_B) \, \mathrm{d}x
\le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}) \left( 1 + \int_\Omega |\nabla_x u|^3 \, \mathrm{d}x \right).
$$
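The boundary condition enters the last steps of (4.5) and (4.6) through the elementary identity (divergence theorem for a field tangent to the boundary):

```latex
% For any smooth scalar \varphi and u with u \cdot n|_{\partial\Omega} = 0,
\int_\Omega u \cdot \nabla_x \varphi \, \mathrm{d}x
  = \int_{\partial\Omega} \varphi \, u \cdot n \, \mathrm{d}S_x
    - \int_\Omega \varphi \, \mathrm{div}_x u \, \mathrm{d}x
  = - \int_\Omega \varphi \, \mathrm{div}_x u \, \mathrm{d}x .
% Applied with \varphi = \tfrac12 |\nabla_x u|^2 this gives the last step of (4.5);
% with \varphi = \tfrac12 (\mathrm{div}_x u)^2 it gives the last step of (4.6).
```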
(4.7)

Moreover,
$$
\int_\Omega \nabla_x p \cdot D_t(u - u_B) \, \mathrm{d}x
= - \int_\Omega p \, \mathrm{div}_x \left( D_t(u - u_B) \right) \mathrm{d}x
= - \int_\Omega p \, \mathrm{div}_x D_t u \, \mathrm{d}x + \int_\Omega p \, \mathrm{div}_x (u \cdot \nabla_x u_B) \, \mathrm{d}x, \quad (4.8)
$$
where
$$
p \, \mathrm{div}_x D_t u
= \partial_t (p \, \mathrm{div}_x u) - \left( \partial_t p + \mathrm{div}_x(p u) \right) \mathrm{div}_x u
+ \mathrm{div}_x(p u) \, \mathrm{div}_x u + p \, \mathrm{div}_x (u \cdot \nabla_x u)
$$
$$
= \partial_t (p \, \mathrm{div}_x u) - \left( \partial_t p + \mathrm{div}_x(p u) \right) \mathrm{div}_x u
+ p \nabla_x u : \nabla^t_x u + \mathrm{div}_x \left( p u \, \mathrm{div}_x u \right).
$$
As $u \cdot n|_{\partial\Omega} = 0$, we have
$$
\int_\Omega \mathrm{div}_x \left( p u \, \mathrm{div}_x u \right) \mathrm{d}x = 0,
$$
and the above estimates together with (4.7) give rise to
$$
\frac12 \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega \mathbb{S}(\mathbb{D}_x u) : \mathbb{D}_x u \, \mathrm{d}x
- \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega p \, \mathrm{div}_x u \, \mathrm{d}x
+ \frac12 \int_\Omega \varrho |D_t u|^2 \, \mathrm{d}x
\le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}) \left( 1 + \int_\Omega |\nabla_x u|^3 \, \mathrm{d}x \right)
- \int_\Omega \left( \partial_t p + \mathrm{div}_x(p u) \right) \mathrm{div}_x u \, \mathrm{d}x.
$$
Finally, we realize that $\partial_t p + \mathrm{div}_x(p u) = \varrho D_t \vartheta$ to conclude
$$
\frac12 \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega \mathbb{S}(\mathbb{D}_x u) : \mathbb{D}_x u \, \mathrm{d}x
- \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega p \, \mathrm{div}_x u \, \mathrm{d}x
+ \frac12 \int_\Omega \varrho |D_t u|^2 \, \mathrm{d}x
\le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}) \left( 1 + \int_\Omega \varrho |D_t \vartheta| |\nabla_x u| \, \mathrm{d}x + \int_\Omega |\nabla_x u|^3 \, \mathrm{d}x \right). \quad (4.9)
$$

4.2 Higher order velocity material derivative estimates

Following [5, Section 3, Lemma 3.3], see also Hoff [12], we deduce
$$
\varrho D_t^2 u + \nabla_x \partial_t p + \mathrm{div}_x (\nabla_x p \otimes u)
= \mu \left( \Delta_x \partial_t u + \mathrm{div}_x (\Delta_x u \otimes u) \right)
+ \left( \eta + \frac{\mu}{3} \right) \left( \nabla_x \mathrm{div}_x \partial_t u + \mathrm{div}_x \left( (\nabla_x \mathrm{div}_x u) \otimes u \right) \right)
+ \varrho u \cdot \nabla_x f. \quad (4.10)
$$
Next, we compute
$$
D_t u_B = u \cdot \nabla_x u_B,
$$
$$
D_t^2 u_B = \partial_t u \cdot \nabla_x u_B + u \cdot \nabla_x (u \cdot \nabla_x u_B)
= D_t u \cdot \nabla_x u_B - (u \cdot \nabla_x u) \cdot \nabla_x u_B + u \cdot \nabla_x (u \cdot \nabla_x u_B)
= D_t u \cdot \nabla_x u_B + (u \otimes u) : \nabla_x^2 u_B. \quad (4.11)
$$
Consequently, we may rewrite (4.10) in the form
$$
\varrho D_t^2 (u - u_B) + \nabla_x \partial_t p + \mathrm{div}_x (\nabla_x p \otimes u)
= \mu \left( \Delta_x \partial_t u + \mathrm{div}_x (\Delta_x u \otimes u) \right)
+ \left( \eta + \frac{\mu}{3} \right) \left( \nabla_x \mathrm{div}_x \partial_t u + \mathrm{div}_x \left( (\nabla_x \mathrm{div}_x u) \otimes u \right) \right)
+ \varrho u \cdot \nabla_x f - \varrho D_t u \cdot \nabla_x u_B - \varrho (u \otimes u) : \nabla_x^2 u_B. \quad (4.12)
$$
The next step is considering the scalar product of (4.12) with $D_t(u - u_B)$ and integrating over $\Omega$. The resulting integrals can be handled as follows:
$$
\varrho D_t^2 (u - u_B) \cdot D_t(u - u_B) = \frac12 \varrho D_t |D_t(u - u_B)|^2
= \frac12 \varrho \left( \partial_t |D_t(u - u_B)|^2 + u \cdot \nabla_x |D_t(u - u_B)|^2 \right)
= \frac12 \partial_t \left( \varrho |D_t(u - u_B)|^2 \right) + \frac12 \mathrm{div}_x \left( \varrho u |D_t(u - u_B)|^2 \right),
$$
where we have used the equation of continuity (1.1). Seeing that $u \cdot n|_{\partial\Omega} = 0$, we get
$$
\int_\Omega \varrho D_t^2 (u - u_B) \cdot D_t(u - u_B) \, \mathrm{d}x
= \frac{\mathrm{d}}{\mathrm{d}t} \frac12 \int_\Omega \varrho |D_t(u - u_B)|^2 \, \mathrm{d}x. \quad (4.13)
$$
Similarly,
$$
\int_\Omega \left( \nabla_x \partial_t p + \mathrm{div}_x (\nabla_x p \otimes u) \right) \cdot D_t(u - u_B) \, \mathrm{d}x
= - \int_\Omega \left( \partial_t p + \mathrm{div}_x(p u) \right) \mathrm{div}_x D_t(u - u_B) \, \mathrm{d}x
+ \int_\Omega \left( \mathrm{div}_x(p u) \, \mathrm{div}_x D_t(u - u_B) - \nabla_x p \otimes u : \nabla_x D_t(u - u_B) \right) \mathrm{d}x, \quad (4.14)
$$
where
$$
\int_\Omega \nabla_x p \otimes u : \nabla_x D_t(u - u_B) \, \mathrm{d}x
= - \int_\Omega p \nabla_x u : \nabla_x D_t(u - u_B) \, \mathrm{d}x
+ \int_\Omega \nabla_x (p u) : \nabla_x D_t(u - u_B) \, \mathrm{d}x.
$$
In addition, as $D_t(u - u_B)$ vanishes on $\partial\Omega$, we can integrate by parts in the last integral, obtaining
$$
\int_\Omega \nabla_x (p u) : \nabla_x D_t(u - u_B) \, \mathrm{d}x
= \int_\Omega \mathrm{div}_x(p u) \, \mathrm{div}_x D_t(u - u_B) \, \mathrm{d}x.
$$
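The passage to (4.13) is an instance of the following general fact, valid for any smooth vector field $\psi$, using the continuity equation (1.1) and $u \cdot n|_{\partial\Omega} = 0$:

```latex
% With \psi = D_t(u - u_B) this is exactly (4.13):
\frac{\mathrm{d}}{\mathrm{d}t} \frac12 \int_\Omega \varrho |\psi|^2 \, \mathrm{d}x
  = \frac12 \int_\Omega \partial_t \varrho \, |\psi|^2 \, \mathrm{d}x
    + \int_\Omega \varrho \,\psi \cdot \partial_t \psi \, \mathrm{d}x
  = -\frac12 \int_\Omega \mathrm{div}_x(\varrho u) |\psi|^2 \, \mathrm{d}x
    + \int_\Omega \varrho \,\psi \cdot \left( D_t \psi - u \cdot \nabla_x \psi \right) \mathrm{d}x
  = \int_\Omega \varrho \,\psi \cdot D_t \psi \, \mathrm{d}x ,
% where the last step integrates -\tfrac12 \mathrm{div}_x(\varrho u)|\psi|^2 by parts
% against \tfrac12 \varrho u \cdot \nabla_x |\psi|^2, with no boundary term since
% u \cdot n|_{\partial\Omega} = 0.
```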
Thus, similarly to the preceding section, we conclude
$$
\int_\Omega \left( \nabla_x \partial_t p + \mathrm{div}_x (\nabla_x p \otimes u) \right) \cdot D_t(u - u_B) \, \mathrm{d}x
= - \int_\Omega \varrho D_t \vartheta \, \mathrm{div}_x D_t(u - u_B) \, \mathrm{d}x
+ \int_\Omega p \nabla_x u : \nabla_x D_t(u - u_B) \, \mathrm{d}x. \quad (4.15)
$$
Analogously,
$$
\int_\Omega \left( \Delta_x \partial_t u + \mathrm{div}_x (\Delta_x u \otimes u) \right) \cdot D_t(u - u_B) \, \mathrm{d}x
= - \int_\Omega \nabla_x \partial_t u : \nabla_x D_t(u - u_B) \, \mathrm{d}x
- \int_\Omega (\Delta_x u \otimes u) : \nabla_x D_t(u - u_B) \, \mathrm{d}x
$$
$$
= - \int_\Omega \nabla_x D_t u : \nabla_x D_t(u - u_B) \, \mathrm{d}x
- \int_\Omega \left( \Delta_x u \otimes u - \nabla_x (u \cdot \nabla_x u) \right) : \nabla_x D_t(u - u_B) \, \mathrm{d}x, \quad (4.16)
$$
where, using the summation convention,
$$
\int_\Omega \left( \Delta_x u \otimes u \right) : \nabla_x D_t(u - u_B) \, \mathrm{d}x
= \int_\Omega \partial_{x_k} \left( u_j \partial_{x_k} u_i \right) \partial_{x_j} D_t(u - u_B)_i \, \mathrm{d}x
- \int_\Omega \partial_{x_k} u_i \, \partial_{x_k} u_j \, \partial_{x_j} D_t(u - u_B)_i \, \mathrm{d}x
$$
$$
= \int_\Omega \partial_{x_j} \left( u_j \partial_{x_k} u_i \right) \partial_{x_k} D_t(u - u_B)_i \, \mathrm{d}x
- \int_\Omega \partial_{x_k} u_i \, \partial_{x_k} u_j \, \partial_{x_j} D_t(u - u_B)_i \, \mathrm{d}x
$$
$$
= \int_\Omega \mathrm{div}_x u \, \nabla_x u : \nabla_x D_t(u - u_B) \, \mathrm{d}x
+ \int_\Omega \left( u_j \partial_{x_k} \partial_{x_j} u_i \right) \partial_{x_k} D_t(u - u_B)_i \, \mathrm{d}x
- \int_\Omega \partial_{x_k} u_i \, \partial_{x_k} u_j \, \partial_{x_j} D_t(u - u_B)_i \, \mathrm{d}x
$$
$$
= \int_\Omega \nabla_x (u \cdot \nabla_x u) : \nabla_x D_t(u - u_B) \, \mathrm{d}x
+ \int_\Omega \mathrm{div}_x u \, \nabla_x u : \nabla_x D_t(u - u_B) \, \mathrm{d}x
- \int_\Omega \partial_{x_j} u_i \, \partial_{x_k} u_j \, \partial_{x_k} D_t(u - u_B)_i \, \mathrm{d}x
- \int_\Omega \partial_{x_k} u_i \, \partial_{x_k} u_j \, \partial_{x_j} D_t(u - u_B)_i \, \mathrm{d}x. \quad (4.17)
$$
Summing up (4.16) and (4.17), we conclude
$$
\int_\Omega \left( \Delta_x \partial_t u + \mathrm{div}_x (\Delta_x u \otimes u) \right) \cdot D_t(u - u_B) \, \mathrm{d}x
= - \int_\Omega \nabla_x D_t u : \nabla_x D_t(u - u_B) \, \mathrm{d}x
- \int_\Omega \mathrm{div}_x u \, \nabla_x u : \nabla_x D_t(u - u_B) \, \mathrm{d}x
+ \int_\Omega \partial_{x_j} u_i \, \partial_{x_k} u_j \, \partial_{x_k} D_t(u - u_B)_i \, \mathrm{d}x
+ \int_\Omega \partial_{x_k} u_i \, \partial_{x_k} u_j \, \partial_{x_j} D_t(u - u_B)_i \, \mathrm{d}x. \quad (4.18)
$$
Estimating the remaining integrals in (4.12) in a similar manner, we may infer
$$
\frac12 \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega \varrho |D_t(u - u_B)|^2 \, \mathrm{d}x
+ \mu \int_\Omega |\nabla_x D_t(u - u_B)|^2 \, \mathrm{d}x
+ \left( \eta + \frac{\mu}{3} \right) \int_\Omega |\mathrm{div}_x D_t(u - u_B)|^2 \, \mathrm{d}x
\le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}) \left( 1 + \int_\Omega \varrho |D_t \vartheta|^2 \, \mathrm{d}x + \int_\Omega |\nabla_x u|^4 \, \mathrm{d}x + \int_\Omega \varrho |D_t u|^2 \, \mathrm{d}x \right), \quad (4.19)
$$
cf. [5, Section 3, Lemma 3.3].

4.3 Velocity decomposition

Following the original idea of Sun, Wang, and Zhang [18], we decompose the velocity field in the form
$$
u = v + w, \quad (4.20)
$$
$$
\mathrm{div}_x \mathbb{S}(\mathbb{D}_x v) = \nabla_x p \ \text{in}\ (0,T) \times \Omega, \quad v|_{\partial\Omega} = 0, \quad (4.21)
$$
$$
\mathrm{div}_x \mathbb{S}(\mathbb{D}_x w) = \varrho D_t u - \varrho f \ \text{in}\ (0,T) \times \Omega, \quad w|_{\partial\Omega} = u_B. \quad (4.22)
$$
Since
$$
\mathrm{div}_x \mathbb{S}(\mathbb{D}_x \partial_t v) = \nabla_x \partial_t p \ \text{in}\ (0,T) \times \Omega, \quad v|_{\partial\Omega} = 0,
$$
we get
$$
\int_\Omega \partial_t p \, \mathrm{div}_x v \, \mathrm{d}x = - \int_\Omega \nabla_x \partial_t p \cdot v \, \mathrm{d}x
= \frac12 \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega \mathbb{S}(\mathbb{D}_x v) : \mathbb{D}_x v \, \mathrm{d}x. \quad (4.23)
$$
Moreover, the standard elliptic estimates for the Lamé operator yield
$$
\| v \|_{W^{1,q}(\Omega; \mathbb{R}^3)} \le c(q, \overline{\varrho}, \overline{\vartheta}) \quad \text{for all}\ 1 \le q < \infty, \quad (4.24)
$$
$$
\| v \|_{W^{2,q}(\Omega; \mathbb{R}^3)} \le c(q, \overline{\varrho}, \overline{\vartheta}) \left( \| \nabla_x \varrho \|_{L^q(\Omega; \mathbb{R}^3)} + \| \nabla_x \vartheta \|_{L^q(\Omega; \mathbb{R}^3)} \right), \quad 1 < q < \infty. \quad (4.25)
$$
Similarly,
$$
\| w \|_{W^{2,2}(\Omega; \mathbb{R}^3)} \le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}) \left( 1 + \| \sqrt{\varrho} \partial_t u \|_{L^2(\Omega; \mathbb{R}^3)} + \| \nabla_x u \|_{L^2(\Omega; \mathbb{R}^{3 \times 3})} \right). \quad (4.26)
$$
The estimates (4.24)–(4.26) are uniform in the time interval $[0,T)$.
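The decomposition (4.20)–(4.22) is consistent with the momentum balance: adding the two elliptic problems recovers (4.1), assuming uniqueness for the Dirichlet problem of the (constant-coefficient) Lamé system,

```latex
% v + w satisfies the same Lam\'e problem as u:
\mathrm{div}_x \mathbb{S}\bigl(\mathbb{D}_x (v + w)\bigr)
  = \nabla_x p + \varrho D_t u - \varrho f
  = \mathrm{div}_x \mathbb{S}(\mathbb{D}_x u) \quad \text{by (4.1)},
\qquad
(v + w)|_{\partial\Omega} = 0 + u_B = u|_{\partial\Omega} .
% The point of the splitting: v inherits elliptic regularity from the pressure
% p = p(\varrho, \vartheta) ((4.24)-(4.25)), while w carries the evolutionary
% part \varrho D_t u of the momentum equation ((4.26)).
```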
4.4 Temperature estimates

Similarly to Fang, Zi, Zhang [5, Section 3, Lemma 3.4], we multiply the internal energy equation (1.3) by $\partial_t \vartheta$ and integrate over $\Omega$, obtaining
$$
c_v \int_\Omega \varrho |D_t \vartheta|^2 \, \mathrm{d}x + \frac{\kappa}{2} \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega |\nabla_x \vartheta|^2 \, \mathrm{d}x
= c_v \int_\Omega \varrho D_t \vartheta \, u \cdot \nabla_x \vartheta \, \mathrm{d}x
- \int_\Omega \varrho \vartheta \, \mathrm{div}_x u \, D_t \vartheta \, \mathrm{d}x
+ \int_\Omega \varrho \vartheta \, \mathrm{div}_x u \, u \cdot \nabla_x \vartheta \, \mathrm{d}x
$$
$$
+ \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega \vartheta \, \mathbb{S}(\mathbb{D}_x u) : \nabla_x u \, \mathrm{d}x
- \mu \int_\Omega \vartheta \left( \nabla_x u + \nabla^t_x u - \frac23 \mathrm{div}_x u \, \mathbb{I} \right) : \left( \nabla_x \partial_t u + \nabla^t_x \partial_t u - \frac23 \mathrm{div}_x \partial_t u \, \mathbb{I} \right) \mathrm{d}x
- 2\eta \int_\Omega \vartheta \, \mathrm{div}_x u \, \mathrm{div}_x \partial_t u \, \mathrm{d}x. \quad (4.27)
$$
Indeed, the term involving the boundary integral is handled as
$$
- \kappa \int_\Omega \Delta_x \vartheta \, \partial_t \vartheta \, \mathrm{d}x
= - \kappa \int_{\partial\Omega} \partial_t \vartheta_B \, \nabla_x \vartheta \cdot n \, \mathrm{d}S_x
+ \frac{\kappa}{2} \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega |\nabla_x \vartheta|^2 \, \mathrm{d}x,
\quad \text{where} \quad
\int_{\partial\Omega} \partial_t \vartheta_B \, \nabla_x \vartheta \cdot n \, \mathrm{d}S_x = 0,
$$
as the boundary temperature is independent of $t$.

Similarly to Fang, Zi, Zhang [5, Section 3, Lemma 3.4], we have to show that the integrals
$$
\int_\Omega \vartheta \, \nabla_x u : \nabla_x \partial_t u \, \mathrm{d}x, \quad
\int_\Omega \vartheta \, \nabla_x u : \nabla^t_x \partial_t u \, \mathrm{d}x, \quad \text{and} \quad
\int_\Omega \vartheta \, \mathrm{div}_x u \, \mathrm{div}_x \partial_t u \, \mathrm{d}x
$$
can be rewritten in a form compatible with (4.19), meaning with the time derivatives replaced by material derivatives. Fortunately, this step can be carried out in the present setting using only the boundary condition $u \cdot n|_{\partial\Omega} = 0$. Indeed, we get
$$
\int_\Omega \vartheta \, \nabla_x u : \nabla_x \partial_t u \, \mathrm{d}x
= \int_\Omega \vartheta \, \nabla_x u : \nabla_x (D_t u) \, \mathrm{d}x
- \int_\Omega \vartheta \, \nabla_x u : \nabla_x (u \cdot \nabla_x u) \, \mathrm{d}x,
$$
where
$$
\int_\Omega \vartheta \, \nabla_x u : \nabla_x (u \cdot \nabla_x u) \, \mathrm{d}x
= \int_\Omega \vartheta \, \nabla_x u : (\nabla_x u \cdot \nabla_x u) \, \mathrm{d}x
+ \frac12 \int_\Omega \vartheta \, u \cdot \nabla_x |\nabla_x u|^2 \, \mathrm{d}x
$$
$$
= \int_\Omega \vartheta \, \nabla_x u : (\nabla_x u \cdot \nabla_x u) \, \mathrm{d}x
- \frac12 \int_\Omega |\nabla_x u|^2 \, \nabla_x \vartheta \cdot u \, \mathrm{d}x
- \frac12 \int_\Omega |\nabla_x u|^2 \, \vartheta \, \mathrm{div}_x u \, \mathrm{d}x.
$$
Similarly,
$$
\int_\Omega \vartheta \, \nabla_x u : \nabla^t_x \partial_t u \, \mathrm{d}x
= \int_\Omega \vartheta \, \nabla_x u : \nabla^t_x (D_t u) \, \mathrm{d}x
- \int_\Omega \vartheta \, \nabla_x u : \nabla^t_x (u \cdot \nabla_x u) \, \mathrm{d}x,
$$
where
$$
\int_\Omega \vartheta \, \nabla_x u : \nabla^t_x (u \cdot \nabla_x u) \, \mathrm{d}x
= \int_\Omega \vartheta \, \nabla_x u : (\nabla^t_x u \cdot \nabla^t_x u) \, \mathrm{d}x
+ \frac12 \int_\Omega \vartheta \, u \cdot \nabla_x \left( \nabla_x u : \nabla^t_x u \right) \mathrm{d}x
$$
$$
= \int_\Omega \vartheta \, \nabla_x u : (\nabla^t_x u \cdot \nabla^t_x u) \, \mathrm{d}x
- \frac12 \int_\Omega \left( \nabla_x u : \nabla^t_x u \right) \nabla_x \vartheta \cdot u \, \mathrm{d}x
- \frac12 \int_\Omega \left( \nabla_x u : \nabla^t_x u \right) \vartheta \, \mathrm{div}_x u \, \mathrm{d}x.
$$
Finally,
$$
\int_\Omega \vartheta \, \mathrm{div}_x u \, \mathrm{div}_x \partial_t u \, \mathrm{d}x
= \int_\Omega \vartheta \, \mathrm{div}_x u \, \mathrm{div}_x D_t u \, \mathrm{d}x
- \int_\Omega \vartheta \, \mathrm{div}_x u \, \mathrm{div}_x (u \cdot \nabla_x u) \, \mathrm{d}x,
$$
where
$$
\int_\Omega \vartheta \, \mathrm{div}_x u \, \mathrm{div}_x (u \cdot \nabla_x u) \, \mathrm{d}x
= \int_\Omega \vartheta \, \mathrm{div}_x u \left( \nabla_x u : \nabla^t_x u \right) \mathrm{d}x
+ \frac12 \int_\Omega \vartheta \, u \cdot \nabla_x |\mathrm{div}_x u|^2 \, \mathrm{d}x
$$
$$
= \int_\Omega \vartheta \, \mathrm{div}_x u \left( \nabla_x u : \nabla^t_x u \right) \mathrm{d}x
- \frac12 \int_\Omega |\mathrm{div}_x u|^2 \, \nabla_x \vartheta \cdot u \, \mathrm{d}x
- \frac12 \int_\Omega |\mathrm{div}_x u|^2 \, \vartheta \, \mathrm{div}_x u \, \mathrm{d}x.
$$
We conclude, using (4.7), (4.19), and (4.27),
$$
\int_\Omega |\nabla_x \vartheta|^2 (\tau, \cdot) \, \mathrm{d}x + \int_0^\tau \int_\Omega \varrho |D_t \vartheta|^2 \, \mathrm{d}x \, \mathrm{d}t
\le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}) \left( 1 + \int_0^\tau \int_\Omega |\nabla_x u|^4 \, \mathrm{d}x \, \mathrm{d}t \right). \quad (4.28)
$$
Next, by virtue of the decomposition $u = v + w$ and the bound (4.24),
$$
\int_\Omega |\nabla_x u|^4 \, \mathrm{d}x \lesssim \int_\Omega |\nabla_x v|^4 \, \mathrm{d}x + \int_\Omega |\nabla_x w|^4 \, \mathrm{d}x
\le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}) \left( 1 + \int_\Omega |\nabla_x w|^4 \, \mathrm{d}x \right), \quad (4.29)
$$
and, similarly,
$$
\| w \|_{L^\infty(\Omega; \mathbb{R}^3)} \le \| u \|_{L^\infty(\Omega; \mathbb{R}^3)} + \| v \|_{L^\infty(\Omega; \mathbb{R}^3)} \le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}). \quad (4.30)
$$
Recalling the Gagliardo–Nirenberg interpolation inequality in the form
$$
\| \nabla_x U \|^2_{L^4(\Omega; \mathbb{R}^3)} \le \| U \|_{L^\infty(\Omega)} \| \Delta_x U \|_{L^2(\Omega)} \quad \text{whenever}\ U|_{\partial\Omega} = 0, \quad (4.31)
$$
we may use (4.29), (4.30) to rewrite (4.28) in the form
$$
\int_\Omega |\nabla_x \vartheta|^2 (\tau, \cdot) \, \mathrm{d}x + \int_0^\tau \int_\Omega \varrho |D_t \vartheta|^2 \, \mathrm{d}x \, \mathrm{d}t
\le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}) \left( 1 + \int_0^\tau \int_\Omega |\nabla_x \vartheta|^2 \, \mathrm{d}x \, \mathrm{d}t + \int_0^\tau \| w \|^2_{W^{2,2}(\Omega; \mathbb{R}^3)} \, \mathrm{d}t \right). \quad (4.32)
$$
Finally, we use the elliptic estimate (4.26) to conclude
$$
\int_\Omega |\nabla_x \vartheta|^2 (\tau, \cdot) \, \mathrm{d}x + \int_0^\tau \int_\Omega \varrho |D_t \vartheta|^2 \, \mathrm{d}x \, \mathrm{d}t
\le c(T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}) \left( 1 + \int_0^\tau \int_\Omega \left( |\nabla_x \vartheta|^2 + |\nabla_x u|^2 \right) \mathrm{d}x \, \mathrm{d}t + \int_0^\tau \| \sqrt{\varrho} \partial_t u \|^2_{L^2(\Omega; \mathbb{R}^3)} \, \mathrm{d}t \right).
$$
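The estimates accumulated so far are closed by Gronwall's lemma; schematically, the form of the lemma used is:

```latex
% Generic Gronwall step: if y is absolutely continuous on [0, T) with
y'(\tau) \le c \,\bigl( 1 + y(\tau) \bigr), \qquad y(0) = y_0 ,
% then
1 + y(\tau) \le (1 + y_0)\, e^{c \tau} \quad \text{for all } \tau \in [0, T).
% Here y(\tau) collects the energy functionals controlled by (4.7), (4.19)
% and the temperature estimate above, y_0 is bounded by the data D_0, and the
% constant c depends only on T, D_0, \overline{\varrho}, \overline{\vartheta}, \overline{u}.
```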
(4.33)

Summing up (4.7), (4.19), and (4.33) we may apply Gronwall's lemma to obtain the following bounds:
\[
\sup_{t \in [0,T)} \| u(t, \cdot) \|_{W^{1,2}(\Omega; \mathbb{R}^3)} \leq c(T, D_0, \varrho, \vartheta, u), \tag{4.34}
\]
\[
\sup_{t \in [0,T)} \| \sqrt{\varrho}\, D_t u(t, \cdot) \|_{L^2(\Omega; \mathbb{R}^3)} \leq c(T, D_0, \varrho, \vartheta, u), \tag{4.35}
\]
\[
\sup_{t \in [0,T)} \| \vartheta(t, \cdot) \|_{W^{1,2}(\Omega)} \leq c(T, D_0, \varrho, \vartheta, u), \tag{4.36}
\]
\[
\int_0^T \int_\Omega |\nabla_x D_t u|^2 \, \mathrm{d}x \, \mathrm{d}t \leq c(T, D_0, \varrho, \vartheta, u), \tag{4.37}
\]
\[
\int_0^T \int_\Omega \varrho |D_t \vartheta|^2 \, \mathrm{d}x \, \mathrm{d}t \leq c(T, D_0, \varrho, \vartheta, u). \tag{4.38}
\]
Moreover, it follows from (4.24), (4.31), (4.35) that
\[
\sup_{t \in [0,T)} \| \nabla_x u(t, \cdot) \|_{L^4(\Omega; \mathbb{R}^{3 \times 3})} \leq c(T, D_0, \varrho, \vartheta, u). \tag{4.39}
\]
In addition, (4.38), (4.39) and the standard parabolic estimates applied to the internal energy balance (1.3) yield
\[
\int_0^T \| \vartheta \|^2_{W^{2,2}(\Omega)} \, \mathrm{d}t \leq c(T, D_0, \varrho, \vartheta, u). \tag{4.40}
\]

5 Second energy bound

It follows from (4.26), (4.35) that
\[
\sup_{t \in [0,T)} \| w(t, \cdot) \|_{W^{2,2}(\Omega; \mathbb{R}^3)} \leq c(T, D_0, \varrho, \vartheta, u); \tag{5.1}
\]
whence, by virtue of (4.24) and the Sobolev embedding $W^{1,2}(\Omega) \hookrightarrow L^6(\Omega)$,
\[
\sup_{t \in [0,T)} \| \nabla_x u(t, \cdot) \|^2_{L^6(\Omega; \mathbb{R}^{3 \times 3})} \leq c(T, D_0, \varrho, \vartheta, u). \tag{5.2}
\]
Moreover, as a consequence of (4.37), $D_t u$ is bounded in $L^2(L^6)$, which, combined with (5.2), gives rise to
\[
\int_0^T \| \partial_t u \|^2_{L^6(\Omega; \mathbb{R}^3)} \, \mathrm{d}t \leq c(T, D_0, \varrho, \vartheta, u).
\]
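For the reader's convenience, the version of Gronwall's lemma invoked above can be stated in its standard integral form; here $h$ is a generic nonnegative quantity and $C_0$, $a$ are illustrative labels, not notation from this paper.

```latex
% Gronwall's lemma (integral form): if h \ge 0 satisfies the integral
% inequality below with a \ge 0 integrable, then the exponential bound follows.
h(t) \leq C_0 + \int_0^t a(s)\, h(s) \, \mathrm{d}s \ \ \text{for all}\ t \in [0,T)
\quad \Longrightarrow \quad
h(t) \leq C_0 \exp\!\left( \int_0^t a(s) \, \mathrm{d}s \right).
```

Applied with $h$ collecting the energy quantities controlled by the summed inequalities (4.7), (4.19), (4.33), this yields the uniform-in-time bounds (4.34)–(4.38).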
(5.3)

Finally, going back to (4.22) we conclude
\[
\int_0^T \| w \|^2_{W^{2,6}(\Omega; \mathbb{R}^3)} \, \mathrm{d}t \leq c(T, D_0, \varrho, \vartheta, u), \tag{5.4}
\]
and
\[
\int_0^T \| u \|^2_{W^{1,q}(\Omega; \mathbb{R}^3)} \, \mathrm{d}t \leq c(T, D_0, \varrho, \vartheta, u, q) \ \text{for any}\ 1 \leq q < \infty. \tag{5.5}
\]

6 Estimates of the derivatives of the density

Using (5.4), (5.5), we may proceed as in [19, Section 5] to deduce the bounds
\[
\sup_{t \in [0,T)} \left( \| \partial_t \varrho(t, \cdot) \|_{L^6(\Omega)} + \| \varrho(t, \cdot) \|_{W^{1,6}(\Omega)} \right) \leq c(T, D_0, \varrho, \vartheta, u). \tag{6.1}
\]
Revisiting the momentum equation (1.2) we use (6.1) together with the other bounds established above to obtain
\[
\int_0^T \| u \|^2_{W^{2,6}(\Omega; \mathbb{R}^3)} \, \mathrm{d}t \leq c(T, D_0, \varrho, \vartheta, u). \tag{6.2}
\]

6.1 Positivity of the density and temperature

It follows from (6.2) that $\mathrm{div}_x u$ is bounded in $L^1(0, T; L^\infty(\Omega))$. Thus the equation of continuity (1.1) yields a positive lower bound on the density
\[
\inf_{(t,x) \in [0,T) \times \Omega} \varrho(t, x) \geq \underline{\varrho} > 0, \tag{6.3}
\]
where the lower bound depends on the data as well as on the length $T$ of the time interval. Similarly, rewriting the internal energy balance equation (1.3) in the form
\[
c_v \left( \partial_t \vartheta + u \cdot \nabla_x \vartheta \right) - \frac{\kappa}{\varrho} \Delta_x \vartheta = \frac{1}{\varrho} \mathbb{S} : \mathbb{D}_x u - \vartheta \, \mathrm{div}_x u \tag{6.4}
\]
we may apply the standard parabolic maximum/minimum principle to deduce
\[
\inf_{(t,x) \in [0,T) \times \Omega} \vartheta(t, x) \geq \underline{\vartheta} > 0. \tag{6.5}
\]

7 Parabolic regularity for the heat equation

We rewrite the parabolic equation (6.4) in terms of $\Theta = \vartheta - \vartheta_B$. Recalling $\Delta_x \vartheta_B = 0$ we get
\[
c_v \left( \partial_t \Theta + u \cdot \nabla_x \vartheta \right) - \frac{\kappa}{\varrho} \Delta_x \Theta = \frac{1}{\varrho} \mathbb{S} : \mathbb{D}_x u - \vartheta \, \mathrm{div}_x u \tag{7.1}
\]
with the homogeneous Dirichlet boundary conditions
\[
\Theta|_{\partial \Omega} = 0. \tag{7.2}
\]
Now we can apply all arguments of [10, Sections 4.6, 4.7] to $\Theta$, obtaining the bounds
\[
\| \vartheta \|_{C^\alpha([0,T] \times \Omega)} \leq c(T, D_0, \varrho, \vartheta, u) \ \text{for some}\ \alpha > 0, \tag{7.3}
\]
\[
\| \vartheta \|_{L^p(0,T; W^{2,3}(\Omega))} + \| \partial_t \vartheta \|_{L^p(0,T; L^3(\Omega))} \leq c(T, D_0, \varrho, \vartheta, u) \ \text{for all}\ 1 \leq p < \infty, \tag{7.4}
\]
together with
\[
\| u \|_{L^p(0,T; W^{2,6}(\Omega; \mathbb{R}^3))} + \| \partial_t u \|_{L^p(0,T; L^6(\Omega; \mathbb{R}^3))} \leq c(T, D_0, \varrho, \vartheta, u) \ \text{for any}\ 1 \leq p < \infty. \tag{7.5}
\]

8 Final estimates

The bounds (7.5) imply, in particular,
\[
\sup_{(t,x) \in [0,T) \times \Omega} | \nabla_x u(t, x) | \leq c(T, D_0, \varrho, \vartheta, u). \tag{8.1}
\]
Thus the desired higher order estimates can be obtained exactly as in [9, Section 4.6]. Indeed, the arguments of [9, Section 4.6] are based on differentiating the equation (7.1) with respect to time, which gives rise to a parabolic problem for $\partial_t \vartheta$ with the homogeneous Dirichlet boundary conditions $\partial_t \vartheta|_{\partial \Omega} = 0$. Specifically, we get
\[
c_v \partial^2_{tt} \vartheta + c_v u \cdot \nabla_x \partial_t \vartheta - \frac{\kappa}{\varrho} \Delta_x \partial_t \vartheta
= - c_v \partial_t u \cdot \nabla_x \vartheta - \frac{1}{\varrho^2} \partial_t \varrho \left( \kappa \Delta_x \vartheta + \mathbb{S}(\mathbb{D}_x u) : \mathbb{D}_x u \right)
+ \frac{2}{\varrho} \mathbb{S}(\mathbb{D}_x u) : \mathbb{D}_x \partial_t u - \partial_t \vartheta \, \mathrm{div}_x u - \vartheta \, \mathrm{div}_x \partial_t u.
\]
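The energy step applied to this equation follows the standard parabolic argument; as a model computation (the labels $z$ and $f$ are illustrative, not notation from this paper), write $z = \partial_t \vartheta$, let $f$ denote the right-hand side above, multiply by $-\Delta_x z$, and integrate over $\Omega$ using $z|_{\partial\Omega} = 0$:

```latex
% Model equation: c_v \partial_t z + c_v u \cdot \nabla_x z
%   - (\kappa/\varrho) \Delta_x z = f, with z|_{\partial\Omega} = 0.
% Testing with -\Delta_x z and integrating by parts:
\frac{c_v}{2} \frac{\mathrm{d}}{\mathrm{d}t} \| \nabla_x z \|^2_{L^2(\Omega)}
+ \int_\Omega \frac{\kappa}{\varrho} |\Delta_x z|^2 \, \mathrm{d}x
= \int_\Omega \left( c_v\, u \cdot \nabla_x z - f \right) \Delta_x z \, \mathrm{d}x.
```

The right-hand side is then absorbed via Young's inequality, using the lower bound $\varrho \geq \underline{\varrho} > 0$ and the uniform bound on $\nabla_x u$, and Gronwall's lemma closes the estimate provided $f \in L^2(0,T; L^2(\Omega))$.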
The estimates obtained in the previous sections imply that the right-hand side of the above equation is bounded in $L^2(0, T; L^2(\Omega))$. Thus, multiplying the equation by $\Delta_x \partial_t \vartheta$ and performing the standard integration by parts, we get the desired estimates as in [9, Section 4.6]. The remaining estimates are obtained exactly as in [9, Section 4.6]:
\[
\sup_{t \in [0,T)} \| \vartheta(t, \cdot) \|_{W^{3,2}(\Omega)} + \sup_{t \in [0,T)} \| \partial_t \vartheta(t, \cdot) \|_{W^{1,2}(\Omega)} \leq c(T, D_0, \varrho, \vartheta, u), \tag{8.2}
\]
\[
\int_0^T \left( \| \partial_t \vartheta \|^2_{W^{2,2}(\Omega)} + \| \vartheta \|^2_{W^{4,2}(\Omega)} \right) \mathrm{d}t \leq c(T, D_0, \varrho, \vartheta, u), \tag{8.3}
\]
\[
\sup_{t \in [0,T)} \| u(t, \cdot) \|_{W^{3,2}(\Omega; \mathbb{R}^3)} + \sup_{t \in [0,T)} \| \partial_t u(t, \cdot) \|_{W^{1,2}(\Omega; \mathbb{R}^3)} \leq c(T, D_0, \varrho, \vartheta, u), \tag{8.4}
\]
\[
\int_0^T \left( \| \partial_t u \|^2_{W^{2,2}(\Omega; \mathbb{R}^3)} + \| u \|^2_{W^{4,2}(\Omega; \mathbb{R}^3)} \right) \mathrm{d}t \leq c(T, D_0, \varrho, \vartheta, u), \tag{8.5}
\]
and
\[
\sup_{t \in [0,T)} \| \varrho(t, \cdot) \|_{W^{3,2}(\Omega)} \leq c(T, D_0, \varrho, \vartheta, u). \tag{8.6}
\]
We have completed the proof of Proposition 2.3.

References

[1] N. Chaudhuri. On weak (measure valued)–strong uniqueness for the Navier–Stokes–Fourier system with Dirichlet boundary condition. Archive Preprint Series, 2022. arXiv preprint No. 2207.00991.

[2] N. Chaudhuri and E. Feireisl. Navier–Stokes–Fourier system with Dirichlet boundary conditions. Appl. Anal., 101(12):4076–4094, 2022.

[3] P. A. Davidson. Turbulence: An introduction for scientists and engineers. Oxford University Press, Oxford, 2004.

[4] J. Fan, S. Jiang, and Y. Ou. A blow-up criterion for compressible viscous heat-conductive flows. Ann. Inst. H. Poincaré Anal. Non Linéaire, 27(1):337–350, 2010.

[5] D. Fang, R. Zi, and T. Zhang. A blow-up criterion for two dimensional compressible viscous heat-conductive flows. Nonlinear Anal., 75(6):3130–3141, 2012.

[6] E. Feireisl, M. Lukáčová-Medviďová, H. Mizerová, and B. She. Numerical analysis of compressible fluid flows. Springer-Verlag, Cham, 2022.
[7] E. Feireisl and M. Lukáčová-Medviďová. Convergence of a stochastic collocation finite volume method for the compressible Navier–Stokes system. Archive Preprint Series, 2021. arXiv preprint No. 2111.07435.

[8] E. Feireisl and M. Lukáčová-Medviďová. Statistical solutions for the Navier–Stokes–Fourier system. Archive Preprint Series, 2022. arXiv preprint No. 2212.06784.

[9] E. Feireisl, A. Novotný, and Y. Sun. A regularity criterion for the weak solutions to the Navier–Stokes–Fourier system. Arch. Ration. Mech. Anal., 212(1):219–239, 2014.

[10] E. Feireisl and Y. Sun. Conditional regularity of very weak solutions to the Navier–Stokes–Fourier system. In Recent advances in partial differential equations and applications, volume 666 of Contemp. Math., pages 179–199. Amer. Math. Soc., Providence, RI, 2016.

[11] E. Feireisl, H. Wen, and C. Zhu. On Nash's conjecture for models of viscous, compressible, and heat conducting fluids. IM ASCR Prague, preprint No. IM 2022 6, 2022.

[12] D. Hoff. Global solutions of the Navier–Stokes equations for multidimensional compressible flow with discontinuous initial data. J. Differential Equations, 120:215–254, 1995.

[13] X. Huang and J. Li. Serrin-type blowup criterion for viscous, compressible, and heat conducting Navier–Stokes and magnetohydrodynamic flows. Comm. Math. Phys., 324(1):147–171, 2013.

[14] X. Huang, J. Li, and Y. Wang. Serrin-type blowup criterion for full compressible Navier–Stokes system. Arch. Ration. Mech. Anal., 207(1):303–316, 2013.

[15] Q. Jiu, Y. Wang, and Y. Ye. Refined blow-up criteria for the full compressible Navier–Stokes equations involving temperature. J. Evol. Equ., 21(2):1895–1916, 2021.

[16] F. Merle, P. Raphaël, I. Rodnianski, and J. Szeftel. On the implosion of a compressible fluid I: smooth self-similar inviscid profiles. Ann. of Math. (2), 196(2):567–778, 2022.

[17] F.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Merle, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Rapha¨el, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Rodnianski, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Szeftel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' On the implosion of a compressible fluid II: singularity formation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Ann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' of Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (2), 196(2):779–889, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' [18] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Sun, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Wang, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Zhang.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' A Beale-Kato-Majda criterion for the 3-D compressible Navier-Stokes equations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Pures Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=', 95(1):36–47, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' [19] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Sun, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Wang, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' A Beale-Kato-Majda criterion for three dimensional com- pressible viscous heat-conductive flows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Arch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Ration.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Mech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=', 201(2):727–742, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' [20] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Valli.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' A correction to the paper: “An existence theorem for compressible viscous fluids” [Ann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Pura Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (4) 130 (1982), 197–213;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' MR 83h:35112].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Ann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Pura Appl.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (4), 132:399–400 (1983), 1982.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' [21] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Valli.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' An existence theorem for compressible viscous fluids.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Ann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Pura Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' (4), 130:197–213, 1982.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' [22] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Valli and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Zajaczkowski.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Navier-Stokes equations for compressible fluids: Global exis- tence and qualitative properties of the solutions in the general case.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=', 103:259–296, 1986.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' [23] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Wen and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Zhu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Blow-up criterions of strong solutions to 3D compressible Navier-Stokes equations with vacuum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=', 248:534–572, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' [24] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QNFRT4oBgHgl3EQfJje8/content/2301.13496v1.pdf'} +page_content=' Wen and C.' 
How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech

Aditya Yedetore∗1, Tal Linzen2, Robert Frank3, R.
Thomas McCoy∗4
1Boston University, 2New York University, 3Yale University, 4Princeton University
yedetore@bu.edu, linzen@nyu.edu, robert.frank@yale.edu, tom.mccoy@princeton.edu

Abstract

When acquiring syntax, children consistently choose hierarchical rules over competing non-hierarchical possibilities. Is this preference due to a learning bias for hierarchical structure, or due to more general biases that interact with hierarchical cues in children's linguistic input? We explore these possibilities by training LSTMs and Transformers, two types of neural networks without a hierarchical bias, on data similar in quantity and content to children's linguistic input: text from the CHILDES corpus. We then evaluate what these models have learned about English yes/no questions, a phenomenon for which hierarchical structure is crucial. We find that, though they perform well at capturing the surface statistics of child-directed speech (as measured by perplexity), both model types generalize in a way more consistent with an incorrect linear rule than the correct hierarchical rule. These results suggest that human-like generalization from text alone requires stronger biases than the general sequence-processing biases of standard neural network architectures.

1 Introduction

Syntax is driven by hierarchical structure, yet we typically encounter sentences as linear sequences of words. How do children come to recognize the hierarchical nature of the languages they acquire? Some argue that humans must have a hierarchical inductive bias, an innate predisposition for hierarchical structure (Chomsky, 1965, 1980). An alternative view (e.g., Lewis and Elman, 2001) is that no such bias is necessary: there may be clear evidence for hierarchical structure in children's input, so that children would choose hierarchical rules even without a hierarchical bias.

∗ Work done while at Johns Hopkins University.
At first blush, recent work in natural language processing (NLP) may seem to indicate that no hierarchical bias is necessary. Neural networks trained on naturally-occurring text perform impressively on syntactic evaluations even though they have no explicit syntactic structure built into them (e.g., Gulordava et al., 2018; Wilcox et al., 2018; Warstadt et al., 2020a). However, these results do not provide strong evidence about the learning biases required to learn language from the data available to humans because these models receive very different training data than humans do (Warstadt and Bowman, 2022). First, NLP models are typically trained on far more data than children receive, so models have more opportunities to encounter rare syntactic structures (Linzen, 2020). Second, most training sets in NLP are built from Internet text (e.g., Wikipedia), which differs qualitatively from the utterances that children typically hear; e.g., sentences in Wikipedia are on average 25 words long (Yasseri et al., 2012), compared to 5 words for sentences in the North American English subset of the CHILDES corpus of child-directed speech (MacWhinney, 2000).

In this work, to evaluate if neural networks without a hierarchical bias generalize like children do, we train models on text1 comparable to the sentences in children's linguistic input: English data from CHILDES. We then analyze what they have learned about the relationship between declarative sentences, such as (1a), and their corresponding yes/no questions, such as (1b):

(1) a. Those are your checkers.
    b. Are those your checkers?

1 Section 6.5 discusses other input types (e.g., visual input).

Crucially, nearly all naturally-occurring yes/no questions are consistent with two rules: one based
arXiv:2301.11462v1 [cs.CL] 26 Jan 2023

on hierarchical structure (2), and one based on linear order (3):2,3

(2) HIERARCHICALQ: The auxiliary at the start of a yes/no question corresponds to the main auxiliary of the corresponding declarative.

(3) LINEARQ: The auxiliary at the start of a yes/no question corresponds to the first auxiliary of the corresponding declarative.

Despite the scarcity of evidence disambiguating these rules, children reliably favor HIERARCHICALQ (Crain and Nakayama, 1987), albeit with occasional errors consistent with LINEARQ (Ambridge et al., 2008). Yes/no questions thus are a prime candidate for an aspect of English syntax for which human-like generalization requires a hierarchical bias. We evaluate yes/no question performance in LSTMs and Transformers, two neural-network architectures that have no inherent hierarchical inductive bias (McCoy et al., 2020; Petty and Frank, 2021). These architectures employ different computational mechanisms, so consistent results across both would indicate that our results are not due to idiosyncrasies of one particular architecture.

To investigate if models generalize more consistently with the hierarchical or linear rule, we evaluate them on cases where the rules make different predictions, such as (4): under HIERARCHICALQ, the question that corresponds to (4a) is (4b), whereas under LINEARQ it is (4c).

(4) a. The boy who has talked can read.
    b. Can the boy who has talked read?
    c. *Has the boy who talked can read?

We find that across several ways of framing the learning task, models fail to learn HIERARCHICALQ. Instead, they generalize in ways that depend on linear order and on the identities of specific words. These results suggest that children's training data, if taken to be words alone, may not contain enough hierarchical cues to encourage hierarchical generalization in a learner without a hierarchical bias.
Thus, explaining human acquisition of syntax may require postulating that humans have stronger inductive biases than those of LSTMs and Transformers, or that information other than word sequences plays a crucial role.4

2 In past work these rules have been framed as transformations named MOVE-FIRST and MOVE-MAIN (McCoy et al., 2020). We instead follow Berwick et al. (2011) and frame the child's knowledge as a relationship between sentences.
3 Though these two rules are the most prominent in prior literature, other rules are possible; see Section 5.2.

2 Background

Though HIERARCHICALQ and LINEARQ often make the same predictions, the evidence in children's input may still favor HIERARCHICALQ. The most straightforward evidence would be utterances that directly disambiguate the rules, such as (4b). Pullum and Scholz (2002) show that disambiguating examples appear in the Wall Street Journal, in literature, and arguably in child-directed speech, but direct evidence may still be too rare to robustly support HIERARCHICALQ (Legate and Yang, 2002). Nonetheless, children might conclude that yes/no questions obey HIERARCHICALQ rather than LINEARQ based on indirect evidence: evidence that other syntactic phenomena are hierarchical (Mulligan et al., 2021).

To test if the cues favoring HIERARCHICALQ render a hierarchical bias unnecessary, we study how well non-hierarchically-biased models acquire English yes/no questions. Several prior papers have used this approach, but their training data differed from children's input in important ways: some used synthetic datasets (Lewis and Elman, 2001; Frank and Mathis, 2007; Clark and Eyraud, 2007; McCoy et al., 2020), others used massive Internet corpora (Lin et al., 2019; Warstadt and Bowman, 2020), and those that used child-directed speech simplified the data by replacing each word with its part of speech (Perfors et al., 2011; Bod et al., 2012).
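To make the contrast between the two rules concrete, they can be sketched as toy string transformations. This sketch is illustrative only and is not part of the original study: the auxiliary inventory is hand-picked, and the index of the main auxiliary must be supplied by hand, since locating it automatically is exactly what requires hierarchical structure.

```python
AUXILIARIES = {"can", "did", "does", "has", "is", "will"}  # toy inventory

def linearq(tokens):
    """LINEARQ: front the linearly first auxiliary in the declarative."""
    i = next(i for i, w in enumerate(tokens) if w in AUXILIARIES)
    return [tokens[i]] + tokens[:i] + tokens[i + 1:]

def hierarchicalq(tokens, main_aux_index):
    """HIERARCHICALQ: front the main-clause auxiliary. Its index is
    passed in by hand, since finding it requires a parse."""
    return ([tokens[main_aux_index]] + tokens[:main_aux_index]
            + tokens[main_aux_index + 1:])

declarative = "the boy who has talked can read".split()
linearq(declarative)           # -> "has the boy who talked can read", cf. (4c)
hierarchicalq(declarative, 5)  # -> "can the boy who has talked read", cf. (4b)
```

On declaratives whose first auxiliary is also the main auxiliary, the two functions coincide, which is why most naturally-occurring questions fail to disambiguate the rules.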
We used training data closer to children's input, namely sentences from CHILDES with word identities preserved, rather than being converted to parts of speech. Two other recent works have also trained neural networks on CHILDES data (Pannitto and Herbelot, 2020; Huebner et al., 2021), but neither investigated yes/no questions.

4 Our datasets and models will be uploaded online soon to facilitate further research.

One particularly important reason for training models on CHILDES is that, in prior work, different types of training data have yielded diverging results: Recent models trained on synthetic data failed to properly acquire yes/no questions (McCoy et al., 2020; Petty and Frank, 2021), whereas ones trained on large Internet corpora scored well on evaluations of yes/no questions (Lin et al., 2019; Warstadt and Bowman, 2020). Given these differing results, it is not clear from past work how these
+4 +Experiment 1: Relative Acceptability +4.1 +Dataset +To train models on data as similar as possible to +the sentences children receive, we extracted data +from CHILDES (MacWhinney, 2000). We used +the North American English portion. We wished +to replicate children’s input, so we excluded the +children’s own utterances, leaving a 9.6-million- +word corpus. We allocated 90% of the data to +training, 5% to validation, and 5% to testing. We +replaced words that appeared two or fewer times in +the training set with , giving a replacement +rate of 0.3%. See Appendix A for more details. +4.2 +Task: Next-Word Prediction +We trained models on next-word prediction, also +known as language modeling. We chose this task +for two reasons. First, it is clear empirically that +next-word prediction can teach neural networks a +substantial amount about syntax (e.g., Hu et al., +2020). Second, it is plausible that humans per- +form some version of next-word prediction during +sentence processing (Altmann and Kamide, 1999; +Hale, 2001; Levy, 2008; Kutas et al., 2011) and +that such prediction may play a role in acquisition +(Elman, 1991). Thus, while next-word prediction +is certainly not the only goal of human language +learners, we view this task as a reasonable first step +in emulating human language acquisition. +4.3 +Architectures +We used two neural network architectures: LSTMs +(Hochreiter and Schmidhuber, 1997) and Trans- +formers (Vaswani et al., 2017). We chose these +models for two reasons. First, they have been the +most successful architectures in NLP. Thus, we +have reason to believe that, of the types of low-bias +models invented, these two are the ones most likely +to discover linguistic regularities in our CHILDES +training data. Second, the two architectures pro- +cess sequences very differently (via recurrence vs. +via attention). 
Thus, if both generalize similarly, we would have evidence that what was learned is strongly evidenced in the data, rather than due to a quirk of one particular architecture.

For our LSTMs, we used 2 layers, a hidden and embedding size of 800, a batch size of 20, a dropout rate of 0.4, and a learning rate of 10. For our Transformers, the corresponding values were 4, 800, 10, 0.2, and 5, and we used 4 attention heads. We chose these values based on a hyperparameter search described in Appendix B. All following results are averaged across 10 runs with different random seeds.

4.4 Results: Language Model Quality

Before testing models on questions, we used perplexity to evaluate how well they captured the basic structure of their training domain. As a baseline, we used a 5-gram model with Kneser-Ney smoothing (Kneser and Ney, 1995) trained with KenLM (Heafield, 2011). The test set perplexity for the 5-gram baseline was 24.37, while the average test set perplexity for the LSTMs and Transformers was 20.05 and 19.69, respectively. For perplexity, lower is better. Thus, both neural network types outperformed the strong baseline of a smoothed 5-gram model, showing that they performed well at capturing the basic statistics of their training domain.5

5 For an intuitive illustration of our model quality, see the sample text generated by them in Appendix H.

4.5 General Syntactic Evaluation

As an additional way to check the validity of our setup, we evaluated our models on the Zorro dataset (Huebner et al., 2021), which is based on BLiMP (Warstadt et al., 2020a). Zorro contains 24 evaluations, each of which targets one syntactic phenomenon (e.g., subject-verb agreement) and involves sentence pairs for which one sentence is grammatical, and the other is minimally different
A model is said to get a sentence +pair correct if it assigns a higher probability to the +grammatical sentence than the ungrammatical one. +Huebner et al. (2021) showed that Transformers +trained on CHILDES data can perform well on +many of the Zorro categories, so if our setup is +sound, our own models should also perform well +on Zorro. +See Appendix D for full results. For each syntac- +tic phenomenon, most model re-runs scored above +0.9, though at least one scored near the chance level +of 0.5. For each re-run of each architecture there +is at least one phenomenon for which the model +scores over 0.97, and many models score 1.00 on +some phenomena. Thus, all models score well on +at least some syntactic evaluations, attaining results +comparable to those of Huebner et al. (2021) and +providing additional support for the validity of our +setup. We now test whether these models have also +successfully learned the specific phenomenon that +we focus on, yes/no questions—a phenomenon not +included in the Zorro dataset. +4.6 +Yes/No Questions +Evaluation Dataset: Forced-Choice Acceptabil- +ity Judgments +As a first way to test whether our +models have learned HIERARCHICALQ, we eval- +uate whether they assign higher probabilities to +sentences consistent with HIERARCHICALQ than +to minimally different sentences that are ungram- +matical. For this purpose, we create an evaluation +dataset containing groups of 6 questions, each cre- +ated by starting with a declarative sentence, such +as (5), and then deleting the first, main, or neither +auxiliary, and inserting the first or main auxiliary +at the front of the sentence.6 For instance, in (6b), +the first auxiliary has been preposed, and the main +auxiliary has been deleted. +(5) +The dog who has seen a boy did try. +(6) +a. Has the dog who seen a boy did try? +b. Has the dog who has seen a boy try? +c. Has the dog who has seen a boy did try ? +d. Did the dog who seen a boy did try? +e. 
Did the dog who has seen a boy try?
    f. Did the dog who has seen a boy did try?

6 It would be possible to also use a 'prepose other' category, where an auxiliary not in the input is inserted (McCoy et al., 2018). We excluded this category because using it would raise complications about which 'other' auxiliary to choose.

Within each group, we evaluate which question the model assigned the highest probability to. If a model has correctly learned HIERARCHICALQ, it should assign the highest probability to the question consistent with this rule, such as (6e).

Several past papers about yes/no questions have used the same general approach (Lewis and Elman, 2001; Reali and Christiansen, 2005). However, these papers considered only pairs of sentences, whereas we consider groups of 6 to allow for a wider range of possible generalizations that a model might have learned.

To generate the declaratives from which we formed groups of 6 questions, we used the context-free grammar (CFG) in Appendix F, which has a vocabulary selected from the most common words in CHILDES. Each declarative generated by the CFG (e.g., (5)) contains two auxiliary verbs: one before the sentence's main verb and one inside a relative clause modifying the subject. One potential problem is that some questions are consistent with both HIERARCHICALQ and LINEARQ. For instance, (7a) can be formed from (7b) with the HIERARCHICALQ-consistent steps PREPOSE-MAIN, DELETE-MAIN, or from (7c) with the LINEARQ-consistent steps PREPOSE-FIRST, DELETE-MAIN.

(7) a. Did the boy who did see the person laugh?
    b. The boy who did see the person did laugh.
    c. The boy who did see the person can laugh.

To avoid this problem, we required that the auxiliary before the main verb must select for a different verb inflection than the one in the relative clause. For instance in (5), did selects for the verb's bare form, while has selects for the past participle form.
Thus, the auxiliary at the start of the question could only correspond to whichever auxiliary in the declarative has the same selectional properties.7

Results: Relative Question Acceptability
For each sentence group, we used per-word perplexity to see which of the 6 candidates the models scored most highly.8 For both LSTMs and Transformers, the correct category (PREPOSE MAIN, DELETE MAIN) was the second-rarest choice, and the most frequent preference was for PREPOSE FIRST, DELETE MAIN, a category that is only partially correct because it references linear order in addition to hierarchical structure (Figure 1).

7 A model could succeed on this dataset with a rule that relates the auxiliary at the start of a question with the last auxiliary in the declarative form. Since our models fail on this dataset, this consideration is not relevant here.

8 We also explored evaluating the models with a more complex measure called SLOR, in which we additionally normalized scores by word frequency (Pauls and Klein, 2012). Both metrics produced qualitatively similar results, so we only report the simpler metric here. See Appendix C.1.

Figure 1: The question types that models prefer when offered a choice between 6 questions. These 6 questions are formed by modifying a declarative with a relative clause on the subject according to 'prepose' and 'delete' rules. The correct category is PREPOSE MAIN, DELETE MAIN. Within each architecture, the proportions across all 6 question types necessarily sum to 1. Each bar shows the average across 10 model re-runs, with single-standard-deviation error bars.

Thus, neither model displays preferences consistent with the correct, fully hierarchical generalization. The two model types showed similar scores, which may mean that these results are largely driven by the statistics of the training data that both models share, rather than by the models' differing inductive biases.

One of the incorrect categories, PREPOSE MAIN, DELETE NONE (such as (6f)), only requires reference to hierarchical structure, so it could be said to capture the hierarchical nature of yes/no questions. Nonetheless, this category was also relatively rare: combining the two fully hierarchical possibilities (PREPOSE MAIN, DELETE MAIN and PREPOSE MAIN, DELETE NONE) accounts for only 26% of LSTM preferences and 27% of Transformer preferences, meaning that, over 70% of the time, both models favored a sentence generated at least partially based on linear order.

There are two likely reasons why our models performed so poorly on yes/no questions when they performed well on many of the phenomena in the Zorro dataset (Section 4.5). First, yes/no questions may simply be harder to learn than the other phenomena; indeed, yes/no questions are often singled out as being likely to pose difficulties for a general-purpose learner (Section 1). Alternatively, it might be that the six-way evaluation we used for yes/no questions is stricter than the binary judgments used for the Zorro dataset.

5 Experiment 2: Question Formation

The previous experiment was designed to operate entirely in the next-word-prediction paradigm, motivated by arguments from past literature about the strength and relative ecological validity of next-word prediction as a training objective (see Section 4.2).
However, one of this setup's shortcomings is that HIERARCHICALQ describes correspondences between questions and declaratives, but Experiment 1 focused on questions alone, with no consideration of declaratives.

In this second experiment, to better capture that HIERARCHICALQ is defined over sentence pairs, we trained models on a sentence-pair task: transforming a declarative into a question (McCoy et al., 2020). For instance, given the child did learn, the model must produce did the child learn ?

We evaluated models in two ways. First, we checked if the models' predictions fully matched the correct questions. This full-sentence evaluation is demanding, and models might fail this evaluation for reasons unrelated to our core hypotheses. For instance, given the child did learn, the model might produce did the baby learn, which would be marked as incorrect, even though this lexical error is not relevant to HIERARCHICALQ.

As a metric that is less demanding and that also more directly targets HIERARCHICALQ, we measured if the first word of the output question corresponded to the first or main auxiliary of the input. Critically, LINEARQ and HIERARCHICALQ make different predictions for the first word of a question so long as the two auxiliaries are distinct: see (4). Because this framing lets the model freely generate its output (instead of choosing one option from a pre-specified set), we allow for the possibility that the rule learned by models may not be identical to any of our manually-generated hypotheses.

Solely training models to perform this transformation involves the implicit assumption that, when children acquire English yes/no questions, the only evidence they leverage is English yes/no questions.
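The first-word diagnostic just described can be sketched as a simple classifier (an illustrative helper, not code from the paper); the labels correspond to the consistency categories reported for Experiment 2:

```python
def classify_first_word(first_word, first_aux, main_aux):
    """Classify the first word of a model-generated question against the
    two candidate rules. Only examples where the first and main auxiliaries
    differ (FIRST-AUX != MAIN-AUX) distinguish the rules."""
    if first_word == first_aux == main_aux:
        return "HierarchicalQ & LinearQ"  # the two rules agree
    if first_word == main_aux:
        return "HierarchicalQ only"       # preposed the main-clause auxiliary
    if first_word == first_aux:
        return "LinearQ only"             # preposed the linearly first auxiliary
    return "neither"

# For "a boy who is playing can try .", the first auxiliary is "is"
# and the main auxiliary is "can".
print(classify_first_word("can", first_aux="is", main_aux="can"))  # HierarchicalQ only
print(classify_first_word("is", first_aux="is", main_aux="can"))   # LinearQ only
```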
However, other types of sentences may also provide useful evidence (Pearl and Mis, 2016): e.g., wh-questions also illustrate subject-auxiliary inversion (Pullum and Scholz, 2002), while, more generally, many types of sentences could provide evidence that the syntax as a whole is hierarchical (Perfors et al., 2011). To explore this possibility, we compared a condition in which models were only trained to perform question formation (the QUESTION FORMATION condition) to another in which models were first pre-trained on next-word prediction with the exact same setup as in Experiment 1 before being further trained to perform question formation (the NEXT-WORD PREDICTION + QUESTION FORMATION condition).

5.1 Dataset

Training Set
Our question formation dataset consisted of the yes/no questions in the CHILDES Treebank (Pearl and Sprouse, 2013a,b), a parsed subset of CHILDES containing 189,359 sentences. We used these parses to extract all yes/no questions from the CHILDES Treebank and derive their corresponding declarative forms. The resulting declarative was concatenated with the question. An example declarative/question pair is:

(8) you can spell your name . can you spell your name ?

The training set consisted of 10,870 declarative/question pairs, the validation set 1,360 pairs, and the test set 1,358 pairs (we will call this test set the randomly-partitioned test set to distinguish it from two other evaluation sets discussed below). We trained models to perform next-word prediction on such concatenated sentence pairs.

The first-word accuracy of the trained model was then computed based on the model's prediction for the word after the period in each test example, while the full-sentence accuracy was computed based on its predictions for all tokens after the period.
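The two accuracy metrics can be sketched as follows (a simplified illustration; in the actual evaluation, the predicted tokens come from the model's predictions for the positions after the period):

```python
def question_accuracy(pred_tokens, gold_tokens):
    """Score one declarative->question example on both metrics.

    pred_tokens: the model's predicted tokens after the period.
    gold_tokens: the reference question's tokens.
    Returns (first_word_correct, full_sentence_correct)."""
    first_word = bool(pred_tokens) and pred_tokens[0] == gold_tokens[0]
    full_sentence = pred_tokens == gold_tokens
    return first_word, full_sentence

gold = "did the child learn ?".split()
# A lexical error: the first word (the moved auxiliary) is right,
# so only the full-sentence metric penalizes it.
print(question_accuracy("did the baby learn ?".split(), gold))   # (True, False)
print(question_accuracy("did the child learn ?".split(), gold))  # (True, True)
```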
All questions in the randomly-partitioned test set were withheld from both the question-formation training set and the next-word-prediction training set. Thus, models had not seen these test examples in their training, even in the NEXT-WORD PREDICTION + QUESTION FORMATION condition, in which they were trained on both tasks.

Evaluation Sets
In addition to the randomly-partitioned test set, we used CFGs to generate two targeted evaluation sets. As in Experiment 1, we selected the CFGs' vocabulary from common words in our CHILDES data. In sentences generated from the first CFG, the sentence's first auxiliary was also its main auxiliary, so LINEARQ and HIERARCHICALQ make the same predictions. (8) exemplifies the type of declarative/question pair in this dataset. We call this dataset FIRST-AUX = MAIN-AUX. For sentences generated by the second CFG, the main auxiliary was the second auxiliary in the sentence; thus, these examples disambiguate LINEARQ and HIERARCHICALQ. Example (9) is a declarative/question pair from this evaluation set.

(9) a boy who is playing can try . can a boy who is playing try ?

We call this dataset FIRST-AUX ≠ MAIN-AUX. See Appendix F for the CFGs used. We sampled 10,000 declarative sentences from these grammars and transformed them into questions according to HIERARCHICALQ to create our evaluation sets.

5.2 Results

Randomly-Partitioned Test Set
The LSTMs and Transformers in the QUESTION FORMATION condition performed well on the randomly-partitioned test set, with full-question accuracies of 0.68 ± 0.014 and 0.87 ± 0.005, respectively (averaged across 10 re-runs, with margins indicating one standard deviation). The models in the NEXT-WORD PREDICTION + QUESTION FORMATION condition performed similarly well, with a full-question accuracy of 0.66 ± 0.008 for the LSTMs and 0.93 ± 0.004 for the Transformers. For both model types, the first-word accuracy for the question was nearly 1.00 across re-runs.
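Throughout, the ± margins denote one sample standard deviation across the 10 re-runs; a minimal sketch of that summary (the accuracy list below is invented for illustration):

```python
import statistics

def summarize_reruns(accuracies):
    """Mean accuracy and sample standard deviation across model re-runs,
    the reporting convention used for the results in this section."""
    return statistics.mean(accuracies), statistics.stdev(accuracies)

# Hypothetical per-re-run full-question accuracies for one condition.
mean, sd = summarize_reruns([0.67, 0.68, 0.69, 0.68, 0.66,
                             0.70, 0.68, 0.67, 0.69, 0.68])
print(f"{mean:.2f} ± {sd:.3f}")
```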
We suspect that Transformers have a stronger full-question accuracy because producing the question requires copying all words from the declarative (but in a different order). Copying is likely easy for Transformers because they can attend to specific words in the prior context, while our LSTMs must compress the entire context into a fixed-size vector, which may degrade the individual word representations. Because both model types achieved near-perfect performance on the crucial first-word accuracy metric, we conclude that our models have successfully learned how to handle the types of declarative/question pairs that we extracted from the CHILDES Treebank.

Targeted Evaluation Sets
On our two targeted evaluation sets, models almost never produced the complete question correctly. Turning to the more lenient measure of first-word accuracy, for examples on which LINEARQ and HIERARCHICALQ predict the same first output word (FIRST-AUX = MAIN-AUX), the Transformer trained only on question formation performed strongly, while the Transformer trained on both tasks, and both LSTMs, performed reasonably well (Figure 2; note that models could choose any word in their vocabulary to begin the output, so chance performance is near 0.00). For the crucial cases that disambiguate the two rules (FIRST-AUX ≠ MAIN-AUX), both models in both conditions performed more consistently with LINEARQ than with HIERARCHICALQ. Training on next-word prediction before question formation had inconsistent effects: it modestly increased the likelihood of hierarchical generalization in LSTMs, yet it decreased that likelihood in Transformers.

Figure 2: Proportion of model-produced questions that were consistent with the linear rule LINEARQ and/or the hierarchical rule HIERARCHICALQ. In the FIRST-AUX = MAIN-AUX dataset, the first auxiliary is the main auxiliary, so both LINEARQ and HIERARCHICALQ produce the correct question string. The FIRST-AUX ≠ MAIN-AUX dataset disambiguates the two rules. Each bar shows the average across 10 model re-runs, with error bars showing one standard deviation.

Lexical Specificity
In Appendix G, we further break down the FIRST-AUX ≠ MAIN-AUX results based on the auxiliaries' identity. The generalization pattern varied considerably across auxiliary pairs. For some auxiliary pairs, the auxiliary chosen to begin the question was usually neither auxiliary in the input (Figure 3, left facet). For other pairs, models usually chose the first auxiliary, regardless of lexical identity (Figure 3, middle facet). Finally, for some pairs, the auxiliary chosen was usually the same one, regardless of whether it was the first or main auxiliary (Figure 3, right facet).

Generalization based on lexical identity is rarely considered in past discussions of English yes/no question acquisition. Of the papers on this phenomenon (see Clark and Lappin (2010), Lasnik and Lidz (2017), and Pearl (2021) for overviews), the only one to our knowledge that discusses lexical specificity is Frank and Mathis (2007), which studied models trained on synthetic data. Our results highlight the importance of testing for a broad range of generalizations: lexically specific hypotheses appear attractive for our low-bias learners, so an account of what biases can yield human-like learning should rule out these lexically specific hypotheses along with linear ones.

Figure 3: Lexical specificity in model behavior. Each facet considers only the evaluation examples containing the two auxiliaries in the facet heading; e.g., the can and do facet includes the inputs the children who can play do learn and the children who do play can learn. The bars show the proportion of model predictions for the first word of the output that are consistent with four potential movement rules, averaged across 10 model re-runs and with error bars showing one standard deviation above and below the mean. This plot only shows an illustrative subset of auxiliary pairs for one model type (Transformers in the NEXT-WORD PREDICTION + QUESTION FORMATION condition); see Appendix G for the full results.

6 Discussion

We have found that, when trained on child-directed speech, two types of standard neural networks performed reasonably well at capturing the statistical properties of the dataset, yet their handling of English yes/no questions was more consistent with the linear rule LINEARQ than with the correct hierarchical rule HIERARCHICALQ. These results support the hypothesis that a learner requires a hierarchical bias to consistently learn hierarchical rules when learning from the linguistic data children receive.

6.1 Takeaways for LSTMs and Transformers

When trained on massive corpora, LSTMs and Transformers perform impressively on some syntactic evaluations. Based on such results, it is tempting to conclude that the general-purpose biases of these architectures suffice to yield human-like syntax acquisition.
Our results caution against this interpretation: when we trained the same architectures on data more similar to children's input, they failed to learn the structure of English yes/no questions. Thus, at least when learning from text alone, LSTMs and Transformers do not display human-like language learning: they do not generalize as humans do from the data that humans receive.

6.2 Takeaways for the Poverty of the Stimulus Debate

Below we specify four possible positions in the poverty-of-the-stimulus debate about the adequacy of children's input for inducing hierarchical rules in low-bias learners, arranged from the most limited assumed innate component to the most expansive:

(10) Any inductive biases: Any learner trained on CHILDES will generalize as humans do.

(11) Any inductive biases that enable in-distribution learning: Any learner that captures the statistical patterns of the training distribution will generalize to HIERARCHICALQ.

(12) Some non-hierarchical inductive biases: Some general-purpose learners will generalize as humans do, but others will not.

(13) Only a hierarchical inductive bias: No general-purpose learners will generalize as humans do; hierarchical biases are necessary.

Position (10) is clearly false: many learners cannot learn certain aspects of syntax, no matter their training data (e.g., bigram models cannot capture long-distance dependencies). Our work shows that position (11) is also false: though our models performed well on the in-distribution test sets of Experiments 1 and 2, they did not generalize in human-like ways. This leaves positions (12) and (13), which our existing results cannot differentiate. It is possible that only learners with hierarchical inductive biases can demonstrate human-like language learning (position (13)), but also that some learners without this bias can succeed (position (12)), just not the learners we tested.
For further discussion of how computational modeling can bear on learnability arguments, see Wilcox et al. (2021).

One potential solution supporting position (12) would be that learners leverage the hierarchical structure of some syntactic phenomena to help conclude that other, impoverished phenomena are hierarchical (Perfors et al., 2011; Mulligan et al., 2021). However, our results from Experiment 2 show that giving learners access to a wider range of phenomena does not automatically improve hierarchical generalization: models' performance on question formation was not substantially improved (and in some cases was even harmed) when they were trained not just on question formation but also on next-word prediction over the entire CHILDES corpus. Thus, although training on text that contains many linguistic phenomena can give models a hierarchical inductive bias when the training is done over large Internet corpora (Warstadt and Bowman, 2020; Mueller et al., 2022), our results provide evidence that this conclusion does not extend to models trained on child-directed speech.

Though both (12) and (13) remain as possibilities, we believe that our results more strongly support (13). Of all currently available general-purpose learners, LSTMs and Transformers are the best at modeling the probabilistic structure of linguistic data. Therefore, if child-directed speech contains clear evidence for the hierarchical nature of yes/no questions (evidence so clear that at least some general-purpose learners could recognize it), it is likely that LSTMs and Transformers would be among the set of general-purpose learners that could use this evidence to make hierarchical generalizations in our experiments. The fact that these architectures instead predominantly favored linear generalizations therefore supports position (13).
6.3 How to test for HIERARCHICALQ

We have argued that an ideal simulation of the acquisition of English yes/no questions would have the following properties:

(14) The training data should be similar to children's linguistic input.

(15) The training task should be ecologically valid.

(16) The evaluation method should focus on correspondences between pairs of sentences rather than the acceptability of individual sentences.

Property (14) motivated our use of text from CHILDES as the training data. We are not aware of a single experimental setup that fully satisfies both Property (15) and Property (16), so we instead used two experiments, each one focusing on one property at the cost of satisfying the other one less well. Experiment 1 works entirely in the context of the relatively ecologically valid task of next-word prediction, motivated by Property (15), but its evaluation is based only on the acceptability of individual sentences, failing to satisfy Property (16). Experiment 2 fully satisfies Property (16) by using an evaluation based on sentence pairs, at the cost of including a less ecologically valid training component based on sentence transformations. Both experiments yielded qualitatively similar conclusions (failure of models to learn HIERARCHICALQ).

6.4 Quantity of Training Data

The size of our training set was plausibly within the range from which children can acquire HIERARCHICALQ. Crain and Nakayama (1987) found that children between ages 3 and 5 behaved much more consistently with HIERARCHICALQ than with LINEARQ. Though these children made many errors, their errors were usually compatible with a hierarchical rule (e.g., PREPOSE MAIN, DELETE NONE errors: see Section 4.6). By age 3, American children receive approximately 10 to 33 million words of input (Hart and Risley, 1995), and the 8.5 million words of our training set is close to the lower end of that range.
Thus, it is reasonable to suppose that a learner that generalizes as children do would favor HIERARCHICALQ after being trained on our training set. Our models, in contrast, regularly preferred sentences generated in ways based on linear order (Figures 1 and 2), a category of error that is very rare in children (Crain and Nakayama, 1987; Ambridge et al., 2008).

In order to give our models the strongest chance of generalizing correctly, it would have been ideal to provide a quantity of data closer to 33 million words, the high end of Hart and Risley's range. Our data source did not contain enough text to make this possible, but future work could investigate ways to augment the data using other sources.

6.5 Type of Training Data

Our training set was both qualitatively and quantitatively closer to children's input than the massive Internet corpora standardly used to train models in NLP (Linzen, 2020). This difference is important: Lin et al. (2019), Warstadt and Bowman (2020), and Mueller et al. (2022) all found evidence that models trained on large Internet corpora performed well on yes/no question evaluations, whereas our models trained on CHILDES performed poorly, though we cannot be certain the differences in results are solely due to differences in the training data, since these prior papers used different model architectures, training tasks, and evaluation setups.

Though our training data are more similar to children's input than massive Internet corpora are, differences remain. Our experiments omit several aspects of a child's experience that might help them acquire syntax, such as prosody (Morgan and Demuth, 1996), visual information (Shi et al., 2019), and meaning (Fitz and Chang, 2017; Abend et al., 2017), all of which might correlate with syntactic structure and thus provide cues to the correct hierarchical generalization.
On the other hand, our dataset might present an easier learning scenario than the one children face, because children must learn to segment the speech stream into words (Lakhotia et al., 2021), while our models do not need to. Further, though real-world grounding could provide helpful information, learners might struggle to leverage this information due to difficulty determining what is being discussed in the physical world (Gleitman et al., 2005).

7 Conclusion

In this work, we trained two types of neural networks (LSTMs and Transformers) on sentences of the types available to children and then analyzed what they had learned about English yes/no questions. Across several evaluation paradigms, these models failed to generalize in human-like ways: humans display hierarchical generalization, while the models' generalization was instead based on linear order and individual words' identities. Our results support the hypothesis that human-like linguistic generalization requires biases stronger than those of LSTMs and Transformers. Future work should investigate what inductive biases enable successful generalization. One approach would be to test architectures with built-in hierarchical structure; past work has shown that such architectures have a hierarchical bias (McCoy et al., 2020) and generalize better on the hierarchical phenomenon of subject-verb agreement (Kuncoro et al., 2018; Lepori et al., 2020), so they may also generalize better on English yes/no questions. A final direction would be to expand the input beyond words alone, so that learners can leverage hierarchical structure that is present in other modalities, such as hierarchical structure in visual scenes.

Ethics Statement

Use of human data: While we did not collect any new human data ourselves, many of our analyses involved the use of prior datasets within the CHILDES database.
All of these datasets were collected in accordance with IRB policies at the institutions of the data collectors, and all followed standard practices in obtaining informed consent and deidentifying data.9

Risks and limitations: The main risk of our proposed analyses is that future work using the same analyses might draw overly strong conclusions based on increased model performance, leading to overestimates of model strength. Such overestimates are an issue because they can lead users to place more trust in a model than is warranted.

To clarify, we view strong performance on our evaluation datasets as necessary but not sufficient to demonstrate human-like learning. Thus, if models perform poorly on our datasets (as the models we evaluated did), then we have strong reason to conclude that the models are not learning in human-like ways. If future models perform better, such results would be consistent with human-like learning but would not conclusively establish that the models learn as humans do, as they might instead be using some shallow heuristic that is not controlled for in our datasets. In other words, a criterion that is necessary but not sufficient facilitates strong conclusions about failure but does not facilitate strong conclusions about success. If future papers are faced with models that are more successful, such papers would ideally supplement results based on our datasets with analyses of the models' internal strategies, in order to more conclusively establish that what they have learned is not a spurious heuristic.

References

Omri Abend, Tom Kwiatkowski, Nathaniel J. Smith, Sharon Goldwater, and Mark Steedman. 2017. Bootstrapping language acquisition. Cognition, 164:116–143.

Gerry T. M. Altmann and Yuki Kamide. 1999. Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3):247–264.

Ben Ambridge, Caroline F. Rowland, and Julian M. Pine. 2008.
Is structure dependence an innate constraint? New experimental evidence from children's complex-question production. Cognitive Science, 32(1):222–255.

Robert Berwick, Paul Pietroski, Beracah Yankama, and Noam Chomsky. 2011. Poverty of the stimulus revisited. Cognitive Science, 35:1207–1242.

9 https://talkbank.org/share/irb/

Rens Bod, Margaux Smets, et al. 2012. Empiricist solutions to nativist problems using tree-substitution grammars. Workshop on Computational Models of Language Acquisition and Loss: EACL.

Noam Chomsky. 1965. Aspects of the Theory of Syntax, 50th anniversary edition. The MIT Press.

Noam Chomsky. 1980. Rules and Representations. Columbia University Press.

Alexander Clark and Rémi Eyraud. 2007. Polynomial identification in the limit of substitutable context-free languages. Journal of Machine Learning Research, 8(8).

Alexander Clark and Shalom Lappin. 2010. Linguistic Nativism and the Poverty of the Stimulus. John Wiley & Sons.

Stephen Crain and Mineharu Nakayama. 1987. Structure dependence in grammar formation. Language, pages 522–543.

Jeffrey L. Elman. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7(2):195–225.

Hartmut Fitz and Franklin Chang. 2017. Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166:225–250.

Robert Frank and Donald Mathis. 2007. Transformational networks. Models of Human Language Acquisition, 22.

Lila R. Gleitman, Kimberly Cassidy, Rebecca Nappa, Anna Papafragou, and John C. Trueswell. 2005. Hard words. Language Learning and Development, 1(1):23–64.

Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically.

John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model.
In Second Meeting of the North American Chapter of the Association for Computational Linguistics.

Betty Hart and Todd R. Risley. 1995. Meaningful Differences in the Everyday Experience of Young American Children. Paul H. Brookes Publishing.

Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187–197, Edinburgh, Scotland. Association for Computational Linguistics.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725–1744, Online. Association for Computational Linguistics.

Philip A. Huebner, Elior Sulem, Cynthia Fisher, and Dan Roth. 2021. BabyBERTa: Learning more grammar with small-scale child-directed language. In Proceedings of CoNLL.

Xuân-Nga Cao Kam, Iglika Stoyneshka, Lidiya Tornyova, Janet D. Fodor, and William G. Sakas. 2008. Bigrams and the richness of the stimulus. Cognitive Science, 32(4):771–787.

Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In 1995 International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 181–184.

Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436, Melbourne, Australia. Association for Computational Linguistics.

Marta Kutas, Katherine A. DeLong, and Nathaniel J. Smith. 2011.
A look around at what lies ahead: Prediction and predictability in language processing. In Predictions in the Brain: Using Our Past to Generate a Future.

Kushal Lakhotia, Eugene Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, et al. 2021. On generative spoken language modeling from raw audio. Transactions of the Association for Computational Linguistics, 9:1336–1354.

Howard Lasnik and Jeffrey L. Lidz. 2017. The argument from the poverty of the stimulus. The Oxford Handbook of Universal Grammar, pages 221–248.

Julie Anne Legate and Charles D. Yang. 2002. Empirical re-assessment of stimulus poverty arguments. The Linguistic Review, 19(1-2):151–162.

Michael Lepori, Tal Linzen, and R. Thomas McCoy. 2020. Representations of syntax [MASK] useful: Effects of constituency and dependency structure in recursive LSTMs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3306–3316, Online. Association for Computational Linguistics.

Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126–1177.

John Lewis and Jeffrey Elman. 2001. Learnability and the statistical structure of language: Poverty of stimulus arguments revisited. Proceedings of the 26th Annual Boston University Conference on Language Development, 1.

Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241–253, Florence, Italy. Association for Computational Linguistics.

Tal Linzen. 2020. How can we accelerate progress towards human-like linguistic generalization? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5210–5217, Online. Association for Computational Linguistics.
Brian MacWhinney. 2000. The CHILDES project: Tools for analyzing talk. Lawrence Erlbaum Associates.

R. Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks.

R. Thomas McCoy, Robert Frank, and Tal Linzen. 2020. Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks.

James L. Morgan and Katherine Demuth. 1996. Signal to syntax: Bootstrapping from speech to grammar in early acquisition. Psychology Press.

Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, and Sebastian Schuster. 2022. Coloring the blank slate: Pre-training imparts a hierarchical inductive bias to sequence-to-sequence models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1352–1368, Dublin, Ireland. Association for Computational Linguistics.

Karl Mulligan, Robert Frank, and Tal Linzen. 2021. Structure here, bias there: Hierarchical generalization by jointly learning syntactic transformations. In Proceedings of the Society for Computation in Linguistics 2021, pages 125–135, Online. Association for Computational Linguistics.

Ludovica Pannitto and Aurélie Herbelot. 2020. Recurrent babbling: Evaluating the acquisition of grammar from limited input data. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 165–176, Online. Association for Computational Linguistics.

Adam Pauls and Dan Klein. 2012. Large-scale syntactic language modeling with treelets. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 959–968, Jeju Island, Korea. Association for Computational Linguistics.

Lisa Pearl. 2021. Poverty of the stimulus without tears. Language Learning and Development, pages 1–40.

Lisa Pearl and Benjamin Mis. 2016.
The role of indirect positive evidence in syntactic acquisition: A look at anaphoric one. Language, 92:1–30.

Lisa Pearl and Jon Sprouse. 2013a. Computational models of acquisition for islands. Experimental syntax and island effects, pages 109–131.

Lisa Pearl and Jon Sprouse. 2013b. Syntactic islands and learning biases: Combining experimental syntax and computational modeling to investigate the language acquisition problem. Language Acquisition, 20(1):23–68.

Andrew Perfors, Josh Tenenbaum, and Terry Regier. 2011. The learnability of abstract syntactic principles. Cognition, 118:306–338.

Jackson Petty and Robert Frank. 2021. Transformers generalize linearly. arXiv preprint arXiv:2109.12036.

Geoffrey K. Pullum and Barbara C. Scholz. 2002. Empirical assessment of stimulus poverty arguments. The Linguistic Review, 18(1-2):9–50.

Florencia Reali and Morten H. Christiansen. 2005. Uncovering the richness of the stimulus: Structure dependence and indirect statistical evidence. Cognitive Science, 29(6):1007–1028.

Haoyue Shi, Jiayuan Mao, Kevin Gimpel, and Karen Livescu. 2019. Visually grounded neural syntax acquisition. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1842–1861, Florence, Italy. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.

Alex Warstadt and Samuel R Bowman. 2020. Can neural networks acquire a structural bias from raw linguistic data? Proceedings of the 42nd Annual Conference of the Cognitive Science Society.

Alex Warstadt and Samuel R Bowman. 2022. What artificial neural networks can tell us about human language acquisition. arXiv preprint arXiv:2208.07998.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman. 2020a. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377–392.

Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, and Samuel R. Bowman. 2020b. Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually). In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 217–235, Online. Association for Computational Linguistics.

Ethan Wilcox, Richard Futrell, and Roger Levy. 2021. Using computational models to test syntactic learnability. LingBuzz preprint lingbuzz/006327.

Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler–gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211–221, Brussels, Belgium. Association for Computational Linguistics.

Taha Yasseri, András Kornai, and János Kertész. 2012. A practical approach to language complexity: A Wikipedia case study. PLoS ONE, 7(11):e48386.

A CHILDES preprocessing details

The train, test, and validation split kept each document in the corpora intact to allow for learning of context. Since a document roughly corresponds to a single recording session, and the sentence order within each document was not randomized, the networks could use cross-sentence context while predicting the next word.

Generally, we kept the data as close as possible to the actual input that the child receives. However, in some cases we modified tokenization to match the CHILDES Treebank, a syntactically parsed subset of the CHILDES corpora. For instance, contractions were split, e.g.
we replaced don’t with do n’t.

The ages of the children vary by corpus, ranging from six months to twelve years. Almost 95% (49/52) of the corpora consist of transcriptions with children between one and six years of age.

Note that for Experiment 2 we used the same vocabulary as in Experiment 1, which means that words not present in Experiment 1’s vocabulary were replaced with unknown-word tokens.

The unprocessed CHILDES datasets were downloaded in XML format from the online XML version¹⁰ of the CHILDES database (MacWhinney, 2000).¹¹ A modified NLTK CHILDESCorpusReader¹² was used to parse the XML into plain text for training.

The CHILDES dataset is licensed for use under a CC BY-NC-SA 3.0 license.¹³ Under the terms of this license, the data can be freely used and adapted, as long as it is not used for commercial purposes and as long as attribution is provided.¹⁴ Our usage fits these criteria.

Though CHILDES contains many corpora of many languages, we use only corpora from the North American English subset of CHILDES, which contains child-directed speech addressed to many different North American children. See the CHILDES database for more details.

By the CHILDES rules for data citation,¹⁵ research that relies on more than 6 of the corpora need only cite the overall database, not each individual corpus.

All the data on CHILDES must adhere to IRB guidelines,¹⁶ including a requirement for anonymity.

The final dataset will be included in our GitHub repository, to be released soon. This dataset is not intended for commercial use.

¹⁰ https://childes.talkbank.org/data-xml/
¹¹ https://childes.talkbank.org
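This contraction-splitting step can be approximated with a short regular-expression pass. The sketch below is illustrative only: it is not the preprocessing script used for the paper, and the rules cover just the common contraction patterns.

```python
import re

def split_contractions(text):
    """Split contractions in the style of the CHILDES Treebank,
    e.g. "don't" -> "do n't" and "it's" -> "it 's".
    An illustrative approximation, not the paper's actual pipeline."""
    text = text.replace("\u2019", "'")  # normalize curly apostrophes
    # Negated auxiliaries: "don't" -> "do n't", "can't" -> "ca n't"
    text = re.sub(r"\b(\w+)n't\b", r"\1 n't", text)
    # Clitics: "it's" -> "it 's", "you're" -> "you 're", etc.
    text = re.sub(r"\b(\w+)'(s|re|ll|ve|d|m)\b", r"\1 '\2", text)
    return text

print(split_contractions("why don't you put the turtle back"))
# -> why do n't you put the turtle back
```

Note that the first rule keeps the Treebank convention of leaving the truncated stem in place, so "can't" becomes "ca n't" rather than "can n't".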
CHILDES corpora included. The CHILDES corpora that we used were: Bates, Bernstein, Bliss, Bloom70, Bloom73, Bohannon, Braunwald, Brent, Brown, Carterette, Clark, Cornell, Demetras1, Demetras2, EllisWeismer, Evans, Feldman, Garvey, Gathercole, Gelman, Gillam, Gleason, HSLLD, Haggerty, Hall, Higginson, Kuczaj, MacWhinney, McCune, McMillan, Morisset, NH, Nelson, NewEngland, NewmanRatner, Normal, POLER, Peters, Post, Rollins, Sachs, Sawyer, Snow, Soderstrom, Sprott, Suppes, Tardif, Valian, VanHouten, VanKleeck, Warren, Weist.

B Hyperparameter Search and Model Implementation

We conducted a hyperparameter search for each of the architectures we investigated (LSTMs and Transformers). Our broad goal in this paper is to investigate the extent to which capturing the statistical properties of the CHILDES dataset naturally leads a learner to capture the structure of yes/no questions. Therefore, we sought the hyperparameter settings that made models most effective at capturing the statistical properties of CHILDES data, a goal we operationalized as finding the model with the lowest perplexity.

B.1 Hyperparameter search

LSTMs. For LSTMs we explored the following hyperparameters via a grid search, for a total of 144 models:

1. layers: 2
2. hidden and embedding size: 200, 800
3. batch size: 20, 80
4. dropout rate: 0.0, 0.2, 0.4, 0.6
5. learning rate: 5.0, 10.0, 20.0
6. random seed: 3 per parameter combination, unique for each LSTM

The LSTM model with the lowest perplexity on the validation set after training had 2 layers, a hidden and embedding size of 800, a batch size of 20, a dropout rate of 0.4, and a learning rate of 10.¹⁷ An LSTM model with these hyperparameters has 37,620,294 parameters.

¹² https://www.nltk.org/howto/childes.html
¹³ https://talkbank.org/share/rules.html
¹⁴ https://creativecommons.org/licenses/by-nc-sa/3.0/
¹⁵ https://talkbank.org/share/citation.html
¹⁶ https://talkbank.org/share/irb/
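The LSTM grid in B.1 can be enumerated directly to confirm the model count. This is a schematic reconstruction with our own variable names, not the actual training harness:

```python
from itertools import product

# Hyperparameter grid for the LSTM search in Appendix B.1.
grid = {
    "layers": [2],
    "hidden_size": [200, 800],        # hidden and embedding sizes are tied
    "batch_size": [20, 80],
    "dropout": [0.0, 0.2, 0.4, 0.6],
    "learning_rate": [5.0, 10.0, 20.0],
    "seed": [0, 1, 2],                # 3 random seeds per combination
}

# One config dict per point in the Cartesian product of the grid.
configs = [dict(zip(grid, combo)) for combo in product(*grid.values())]
print(len(configs))  # 144: 1 * 2 * 2 * 4 * 3 * 3
```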
Transformers. For the Transformers we performed a hyperparameter sweep over the following hyperparameters, for a total of 84 models:

1. layers: 2, 4, 8, 16
2. context size: 50, 100, 500
3. hidden and embedding size: 200, 800, 1600
4. heads: 2, 4, 8, 16
5. batch size: 20, 80, 160
6. dropout rate: 0.0, 0.2, 0.4, 0.6
7. learning rate: 0.5, 1.0, 5.0, 10.0, 20.0
8. random seed: 3 per parameter combination

The Transformer model with the lowest perplexity after training had 4 layers, a context size of 500, a hidden size of 800, a batch size of 10, 4 heads, a dropout rate of 0.2, and a learning rate of 5.0. A Transformer model with these parameters has 42,759,494 parameters.

LSTMs            Prepose First   Prepose Main
Delete First         0.01            0.14
Delete Main          0.39            0.12
Delete None          0.20            0.14

Table 1: Numerical results for LSTMs’ preference for questions consistent with combinations of ‘prepose’ and ‘delete’ rules. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

B.2 Comment on model size

Although neural networks generally perform better as they increase in size, the best-performing models that we found were not the largest ones. This result is consistent with the finding of Warstadt et al. (2020b) that, for small training sets, smaller language models sometimes outperform larger ones. Thus, it is unlikely that scaling up models beyond the range we investigated would have yielded better CHILDES language models than the ones we trained.

¹⁷ The hyperparameters we explored for the LSTMs were those of Gulordava et al. (2018), the code for which can be found at https://github.com/facebookresearch/colorlessgreenRNNs
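Perplexity, the selection criterion used throughout this appendix, is the exponential of the mean per-token negative log-likelihood. A framework-agnostic sketch with toy numbers:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token).
    `token_logprobs` holds the natural-log probability the model
    assigned to each token of the held-out text (toy values here)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# If the model assigns probability 0.1 to each of 4 validation tokens:
print(perplexity([math.log(0.1)] * 4))  # ~10.0
```

Lower perplexity means the model spreads less probability mass away from the words that actually occur, which is why it serves as the model-selection criterion here.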
B.3 Implementation

All models were implemented in PyTorch by building on code from https://github.com/facebookresearch/colorlessgreenRNNs and https://github.com/pytorch/examples/tree/main/word_language_model, and were trained using NVIDIA K80 GPUs. The final models will be included in our GitHub repository, which will be released soon. These models are not intended for commercial use.

C PREPOSE-ONE&DELETE-ONE Full Results

See Table 1 and Table 2 for these results.

C.1 Results using SLOR

See Table 3 and Table 4 for these results.

Transformers     Prepose First   Prepose Main
Delete First         0.01            0.16
Delete Main          0.31            0.06
Delete None          0.25            0.21

Table 2: Numerical results for Transformers’ preference for questions consistent with combinations of ‘prepose’ and ‘delete’ rules. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

LSTMs            Prepose First   Prepose Main
Delete First         0.01            0.14
Delete Main          0.33            0.80
Delete None          0.26            0.18

Table 3: Analysis of LSTMs’ preference for questions consistent with combinations of ‘prepose’ and ‘delete’ rules, evaluated using SLOR. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

Transformers     Prepose First   Prepose Main
Delete First         0.01            0.15
Delete Main          0.27            0.40
Delete None          0.29            0.24

Table 4: Analysis of Transformers’ preference for questions consistent with combinations of ‘prepose’ and ‘delete’ rules, evaluated using SLOR. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

D BabyBERTa dataset evaluation

For an illustrative subset of the results on the Zorro evaluation dataset (discussed in Section 4.5), see Figure 4. For the full results, see Figure 5.
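The SLOR scores reported in Appendix C.1 follow the standard definition of the syntactic log-odds ratio: the model's log-probability of a sentence minus a unigram-baseline log-probability, divided by sentence length, which controls for word frequency and sentence length. A minimal sketch of that computation with toy numbers (not the paper's code):

```python
import math

def slor(model_logprob, unigram_logprobs):
    """Syntactic log-odds ratio: length-normalized difference between
    the model's sentence log-probability and a unigram baseline."""
    return (model_logprob - sum(unigram_logprobs)) / len(unigram_logprobs)

# Toy 4-word sentence: the model assigns it probability 1e-6 overall,
# while its words have unigram probabilities 0.01, 0.002, 0.01, 0.005.
model_lp = math.log(1e-6)
unigram_lps = [math.log(p) for p in (0.01, 0.002, 0.01, 0.005)]
print(round(slor(model_lp, unigram_lps), 3))  # 1.727
```

A positive SLOR means the model assigns the sentence more probability than a frequency-only baseline would, so comparing SLOR across candidate questions removes preferences driven purely by word frequency.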
E Move-One Dataset Results

One approach used in several past papers (e.g., Lewis and Elman (2001) and Reali and Christiansen (2005)) is to evaluate models using pairs of sentences that can be formed by starting with a declarative sentence (e.g., (17)) and moving one of its auxiliaries to the front of the sentence. The first sentence in each pair (e.g., (18a)) follows HIERARCHICALQ, because the main auxiliary is moved, while the second (e.g., (18b)) follows LINEARQ, because the first auxiliary is moved.

(17) The children who are talking are sleeping.
(18) a. Are the children who are talking sleeping?
     b. Are the children who talking are sleeping?

Figure 4: The performance of a selected subset of model re-runs on a selected subset of the Zorro evaluations. Each Zorro evaluation targets a specific syntactic phenomenon—in the cases shown here, irregular verbs, subject-verb agreement across relative clauses, and correct argument ordering.

If a model assigns a higher probability to (18a) than (18b), that is evidence that the model favors HIERARCHICALQ over LINEARQ. While this preference is a necessary component of correctly learning HIERARCHICALQ, it is by no means sufficient: indeed, Kam et al. (2008) showed that models can prefer sentences consistent with HIERARCHICALQ over sentences consistent with LINEARQ due to shallow n-gram statistics rather than due to knowledge of hierarchical structure. More generally, there are infinitely many other incorrect hypotheses besides LINEARQ, and demonstrating successful learning of HIERARCHICALQ would require ruling out all of them.
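This pairwise evaluation reduces to comparing the total log-probability a model assigns to the two candidate questions. The sketch below fakes the per-word probabilities to show only the comparison logic; a real evaluation would obtain them from the trained network:

```python
import math

# Stand-in sentence scorer: sums per-word log-probabilities. The
# per-word probabilities below are invented for illustration only.
def sentence_logprob(word_probs):
    return sum(math.log(p) for p in word_probs)

# Hypothetical per-word probabilities for the two candidate questions:
hierarchical = [0.2, 0.1, 0.05, 0.1, 0.2, 0.1]   # "Are the children who are talking sleeping?"
linear       = [0.2, 0.1, 0.05, 0.02, 0.1, 0.1]  # "Are the children who talking are sleeping?"

prefers_hierarchical = sentence_logprob(hierarchical) > sentence_logprob(linear)
print(prefers_hierarchical)  # True for these toy numbers
```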
Investigating all possibilities is intractable, but we can at least investigate a few additional plausible ones. Thus, in the main paper we depart from prior work by considering a greater number of candidate sentences than just the pairs of sentences used in prior work.

To create the MOVE-ONE dataset, we randomly sampled 10,000 declarative sentences from our CFGs for which the first and main auxiliary were identical and then modified them to give 10,000 sentence pairs. To create the PREPOSE-ONE&DELETE-ONE dataset, we randomly sampled a different 10,000 declarative sentences from our CFGs for which the first and main auxiliary were different and then modified them to give 10,000 6-tuples of sentences. See Appendix F for more details about the CFGs.

F Context Free Grammars

Figure 6 contains the context-free grammar used for the analyses in Section 4.6. Figures 7 and 8 contain the context-free grammars used for the targeted evaluation sets in Section 5.2. Figure 9 contains the vocabulary used for all of these datasets.

G Breakdown by lexical identity

Here we further break down models’ predictions for the FIRST-AUX ≠ MAIN-AUX evaluation set based on the identities of the two auxiliaries in the input sentence. Figure 10 gives the results for the LSTM in the QUESTION FORMATION condition; Figure 11 for the LSTM in the NEXT-WORD PREDICTION + QUESTION FORMATION condition; Figure 12 for the Transformer in the QUESTION FORMATION condition; and Figure 13 for the Transformer in the NEXT-WORD PREDICTION + QUESTION FORMATION condition.

H Example generated text

Figure 14 gives some example text generated by our models. Models trained on next-word prediction produce their predictions as a probability distribution over the vocabulary. To use such models to generate text, we sample a word from this distribution, then use that word as the model’s input for the next time step.
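This generation procedure is ancestral sampling. A minimal sketch, with a toy bigram-style distribution standing in for the trained network (real models condition on the full context and have a much larger vocabulary):

```python
import random

# Toy next-word distributions conditioned on the previous word only,
# standing in for a trained LSTM/Transformer.
next_word_dist = {
    "<s>":  {"what": 0.6, "do": 0.4},
    "what": {"do": 0.7, "else": 0.3},
    "do":   {"you": 1.0},
    "else": {"do": 1.0},
    "you":  {"want": 0.5, "eat": 0.5},
    "want": {"</s>": 1.0},
    "eat":  {"</s>": 1.0},
}

def generate(max_len=10, seed=0):
    rng = random.Random(seed)
    word, out = "<s>", []
    for _ in range(max_len):
        dist = next_word_dist[word]
        # Sample the next word, then feed it back in as the next input.
        word = rng.choices(list(dist), weights=list(dist.values()))[0]
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(generate())
```

Because each step samples rather than taking the argmax, repeated runs with different seeds produce different continuations, which is how the varied texts in Figure 14 arise.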
Figure 5: Results on the targeted syntactic evaluations in Huebner et al. (2021) in percent accuracy. Evaluation names in Figure 4 were shortened.

S → {NP_S RC_S_BARE MAIN-AUX VP_S_PAST}
S → {NP_S RC_S_PAST MAIN-AUX VP_S_BARE}
S → {NP_S RC_S_BARE MAIN-AUX VP_S_PROG}
S → {NP_S RC_S_PROG MAIN-AUX VP_S_BARE}
S → {NP_S RC_S_PAST MAIN-AUX VP_S_PROG}
S → {NP_S RC_S_PROG MAIN-AUX VP_S_PAST}
S → {NP_P RC_P_BARE MAIN-AUX VP_P_PAST}
S → {NP_P RC_P_PAST MAIN-AUX VP_P_BARE}
S → {NP_P RC_P_BARE MAIN-AUX VP_P_PROG}
S → {NP_P RC_P_PROG MAIN-AUX VP_P_BARE}
S → {NP_P RC_P_PAST MAIN-AUX VP_P_PROG}
S → {NP_P RC_P_PROG MAIN-AUX VP_P_PAST}
NP_S → {Det_S N_S}
NP_P → {Det_P N_P}
NP_O → {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P}
VP_S_BARE → {Aux_S IV}
VP_S_BARE → {Aux_S TV NP_O}
VP_S_PROG → {Aux_S_BE IV_IS}
VP_S_PROG → {Aux_S_BE TV_IS NP_O}
VP_S_PAST → {Aux_S_HAS IV_HAS}
VP_S_PAST → {Aux_S_HAS TV_HAS NP_O}
VP_P_BARE → {Aux_P IV}
VP_P_BARE → {Aux_P TV NP_O}
VP_P_PROG → {Aux_P_BE IV_IS}
VP_P_PROG → {Aux_P_BE TV_IS NP_O}
VP_P_PAST → {Aux_P_HAS IV_HAS}
VP_P_PAST → {Aux_P_HAS TV_HAS NP_O}
RC_S_BARE → {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P}
RC_S_PROG → {Rel Aux_S_BE IV_IS |
Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE TV_IS Det_P N_P}
RC_S_PAST → {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | Rel Aux_S_HAS TV_HAS Det_P N_P}
RC_P_BARE → {Rel Aux_P IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P}
RC_P_PROG → {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE TV_IS Det_P N_P}
RC_P_PAST → {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S | Rel Aux_P_HAS TV_HAS Det_P N_P}

Figure 6: CFG used to generate the PREPOSE-ONE-AND-DELETE-ONE evaluation dataset

S → {NP_M_S VP_M_S | NP_M_P VP_M_P}
NP_M_S → {Det_S N_S | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P}
NP_M_P → {Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P}
NP_O → {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P | Det_S N_S RC_S | Det_P N_P RC_P}
VP_M_S → {Aux_S IV}
VP_M_S → {Aux_S TV NP_O}
VP_M_S → {Aux_S_BE IV_IS}
VP_M_S → {Aux_S_BE TV_IS NP_O}
VP_M_S → {Aux_S_HAS IV_HAS}
VP_M_S → {Aux_S_HAS TV_HAS NP_O}
VP_M_P → {Aux_P IV}
VP_M_P → {Aux_P TV NP_O}
VP_M_P → {Aux_P_BE IV_IS}
VP_M_P → {Aux_P_BE TV_IS NP_O}
VP_M_P → {Aux_P_HAS IV_HAS}
VP_M_P → {Aux_P_HAS TV_HAS NP_O}
RC_S → {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P}
RC_S → {Rel Aux_S_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE TV_IS Det_P N_P}
RC_S → {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | Rel Aux_S_HAS TV_HAS Det_P N_P}
RC_P → {Rel Aux_P
IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P}
RC_P → {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE TV_IS Det_P N_P}
RC_P → {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S | Rel Aux_P_HAS TV_HAS Det_P N_P}

Figure 7: CFG used to generate the FIRST-AUX = MAIN-AUX evaluation dataset

S → {NP_M_S VP_M_S | NP_M_P VP_M_P}
NP_M_S → {Det_S N_S | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P}
NP_M_P → {Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P}
NP_O → {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P | Det_S N_S RC_S | Det_P N_P RC_P}
VP_M_S → {Aux_S IV}
VP_M_S → {Aux_S TV NP_O}
VP_M_S → {Aux_S_BE IV_IS}
VP_M_S → {Aux_S_BE TV_IS NP_O}
VP_M_S → {Aux_S_HAS IV_HAS}
VP_M_S → {Aux_S_HAS TV_HAS NP_O}
VP_M_P → {Aux_P IV}
VP_M_P → {Aux_P TV NP_O}
VP_M_P → {Aux_P_BE IV_IS}
VP_M_P → {Aux_P_BE TV_IS NP_O}
VP_M_P → {Aux_P_HAS IV_HAS}
VP_M_P → {Aux_P_HAS TV_HAS NP_O}
RC_S → {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P}
RC_S → {Rel Aux_S_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE TV_IS Det_P N_P}
RC_S → {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | Rel Aux_S_HAS TV_HAS Det_P N_P}
RC_P → {Rel Aux_P IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P}
RC_P → {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE TV_IS Det_P N_P}
RC_P → {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P
N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S | Rel Aux_P_HAS TV_HAS Det_P N_P}

Figure 8: CFG used to generate the FIRST-AUX ≠ MAIN-AUX evaluation dataset

Det_S → {the | some | this}
Det_P → {the | some | those}
N_S → {baby | girl | boy | animal | child | person | horse}
N_P → {babies | girls | boys | animals | children | people | horses}
IV → {play | read | draw | sit | fall | talk | sleep | try | work | walk}
IV_IS → {playing | reading | drawing | sitting | falling | talking | sleeping | trying | working | walking}
IV_HAS → {played | read | drawn | sat | fallen | talked | slept | tried | worked | walked}
TV → {call | see | find | help | feed | know | pick | visit | watch | reach}
TV_IS → {calling | seeing | finding | helping | feeding | knowing | picking | visiting | watching | reaching}
TV_HAS → {called | seen | found | helped | fed | known | picked | visited | watched | reached}
Aux_P → {do | did | can | would | shall}
Aux_S → {does | did | can | would | shall}
Aux_S_BE → {is | was}
Aux_P_BE → {are | were}
Aux_S_HAS → {has}
Aux_P_HAS → {have}
Prep → {by | behind}
Rel → {who | that}

Figure 9: Vocabulary used for the PREPOSE-ONE-AND-DELETE-ONE, FIRST-AUX ≠ MAIN-AUX, and FIRST-AUX = MAIN-AUX evaluation datasets

Figure 10: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for LSTMs first trained on next-word prediction and then question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.
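Grammars in this format can be sampled with a small recursive expander. The sketch below uses a reduced fragment of the grammar and vocabulary above (our own simplification, not the generator used to build the evaluation sets):

```python
import random

# A reduced fragment of the evaluation CFG: nonterminals map to lists
# of right-hand sides; anything not in GRAMMAR is a terminal word.
GRAMMAR = {
    "S":     [["NP_S", "VP_S"]],
    "NP_S":  [["Det_S", "N_S"]],
    "VP_S":  [["Aux_S", "IV"]],
    "Det_S": [["the"], ["some"], ["this"]],
    "N_S":   [["baby"], ["girl"], ["boy"]],
    "Aux_S": [["does"], ["can"], ["would"]],
    "IV":    [["play"], ["read"], ["sleep"]],
}

def expand(symbol, rng):
    if symbol not in GRAMMAR:          # terminal: emit the word itself
        return [symbol]
    rhs = rng.choice(GRAMMAR[symbol])  # pick a production uniformly
    return [word for part in rhs for word in expand(part, rng)]

rng = random.Random(0)
print(" ".join(expand("S", rng)))  # one sampled declarative sentence
```

Sampling declaratives this way, then transforming them into candidate questions, is the general recipe behind the MOVE-ONE and PREPOSE-ONE&DELETE-ONE datasets described in Appendix E.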
Figure 11: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for LSTMs trained only on question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.

Figure 12: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for Transformers first trained on next-word prediction and then question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.

Figure 13: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for Transformers trained only on question formation.
The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.

Text from CHILDES:
do you eat in nursery school too ? oh . it ’s nursery school . xxx . what do you get to eat ? what else do you get to eat besides a book ? do n’t play with those books . what else do you have ? what do you have in nursery school ? alright - you can take it . why do n’t you put the turtle back . well - you ’re not going to change his water now . tonight . tonight - you change it about ten times a day . no - you can do it later . forget it now . too bad . too bad . that poor turtle . you do n’t let him sit there five minutes . that was five minutes ?

LSTM generated text:
do you eat in nursery school too ? oh . it ’s nursery school . no ! tomorrow ’s Friday . right . yeah - yeah . yeah I know what . I told you that he liked snipsnap that way . well this is address the grayish kings . ok . what kind of bird do you want to play with ? what else do you want to do ? your crayons . got a circle . I ’m going to use these . there . remember ? ok do you want to put all of these there ? ok let ’s put everything away . what do you want to do ? you want to play in the box ? it ’s just like a ball because it has a ball on it .

Transformer generated text:
do you eat in nursery school too ? oh . it ’s nursery school . mm-hm . and it looks like a little girl . mm-hm . like a little girl . hm , mm-hm . looks like she ’s concerned about baseball this is a picture of a double football player . mm-hm . um , it has a uniform on it . mm-hm . anything more ? no . ok great . got a hat on . and a flower and a little flower . remember that ? do you remember that word ? what do you remember that word ? do you remember what it word says ? it says seven - eight . what ’s that word ? it says eight . look at that word . three - four - six ... eight - nine ...
Figure 14: Comparison of text generated by the LSTM and Transformer models with a block of text chosen randomly from the training data. The LSTMs and Transformers were both seeded with the first three sentences of the text taken from CHILDES, which is underlined in the two model-generated texts. Note that neither of the model-generated texts was cherry-picked, either for quality or to be representative of the models’ usual output: rather, they were the first things the models generated when seeded with the underlined portion.

How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech

Aditya Yedetore∗1, Tal Linzen2, Robert Frank3, R. Thomas McCoy∗4
1Boston University, 2New York University, 3Yale University, 4Princeton University
yedetore@bu.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='edu, linzen@nyu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='edu, robert.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='frank@yale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='edu, tom.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='mccoy@princeton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='edu Abstract When acquiring syntax, children consistently choose hierarchical rules over competing non- hierarchical possibilities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Is this preference due to a learning bias for hierarchical struc- ture, or due to more general biases that in- teract with hierarchical cues in children’s lin- guistic input?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' We explore these possibili- ties by training LSTMs and Transformers— two types of neural networks without a hi- erarchical bias—on data similar in quantity and content to children’s linguistic input: text from the CHILDES corpus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' We then evaluate what these models have learned about English yes/no questions, a phenomenon for which hi- erarchical structure is crucial.' 
We find that, though they perform well at capturing the surface statistics of child-directed speech (as measured by perplexity), both model types generalize in a way more consistent with an incorrect linear rule than the correct hierarchical rule. These results suggest that human-like generalization from text alone requires stronger biases than the general sequence-processing biases of standard neural network architectures.

1 Introduction

Syntax is driven by hierarchical structure, yet we typically encounter sentences as linear sequences of words. How do children come to recognize the hierarchical nature of the languages they acquire? Some argue that humans must have a hierarchical inductive bias—an innate predisposition for hierarchical structure (Chomsky, 1965, 1980). An alternative view (e.g., Lewis and Elman, 2001) is that no such bias is necessary: there may be clear evidence for hierarchical structure in children's input, so that children would choose hierarchical rules even without a hierarchical bias.

∗ Work done while at Johns Hopkins University.

At first blush, recent work in natural language processing (NLP) may seem to indicate that no hierarchical bias is necessary. Neural networks trained on naturally-occurring text perform impressively on syntactic evaluations even though they have no explicit syntactic structure built into them (e.g., Gulordava et al., 2018; Wilcox et al., 2018; Warstadt et al., 2020a). However, these results do not provide strong evidence about the learning biases required to learn language from the data available to humans because these models receive very different training data than humans do (Warstadt and Bowman, 2022). First, NLP models are typically trained on far more data than children receive, so models have more opportunities to encounter rare syntactic structures (Linzen, 2020). Second, most training sets in NLP are built from Internet text (e.g., Wikipedia), which differs qualitatively from the utterances that children typically hear; e.g., sentences in Wikipedia are on average 25 words long (Yasseri et al., 2012), compared to 5 words for sentences in the North American English subset of the CHILDES corpus of child-directed speech (MacWhinney, 2000).

In this work, to evaluate if neural networks without a hierarchical bias generalize like children do, we train models on text1 comparable to the sentences in children's linguistic input: English data from CHILDES. We then analyze what they have learned about the relationship between declarative sentences, such as (1a), and their corresponding yes/no questions, such as (1b):

(1) a. Those are your checkers.
    b. Are those your checkers?

Crucially, nearly all naturally-occurring yes/no questions are consistent with two rules: one based on hierarchical structure (2), and one based on linear order (3):2,3

(2) HIERARCHICALQ: The auxiliary at the start of a yes/no question corresponds to the main auxiliary of the corresponding declarative.

(3) LINEARQ: The auxiliary at the start of a yes/no question corresponds to the first auxiliary of the corresponding declarative.

1Section 6.5 discusses other input types (e.g., visual input).

arXiv:2301.11462v1 [cs.CL] 26 Jan 2023

Despite the scarcity of evidence disambiguating these rules, children reliably favor HIERARCHICALQ (Crain and Nakayama, 1987), albeit with occasional errors consistent with LINEARQ (Ambridge et al., 2008). Yes/no questions thus are a prime candidate for an aspect of English syntax for which human-like generalization requires a hierarchical bias. We evaluate yes/no question performance in LSTMs and Transformers, two neural-network architectures that have no inherent hierarchical inductive bias (McCoy et al., 2020; Petty and Frank, 2021). These architectures employ different computational mechanisms, so consistent results across both would indicate that our results are not due to idiosyncrasies of one particular architecture.

To investigate if models generalize more consistently with the hierarchical or linear rule, we evaluate them on cases where the rules make different predictions, such as (4): under HIERARCHICALQ, the question that corresponds to (4a) is (4b), whereas under LINEARQ it is (4c).

(4) a. The boy who has talked can read.
    b. Can the boy who has talked read?
    c. *Has the boy who talked can read?

We find that across several ways of framing the learning task, models fail to learn HIERARCHICALQ. Instead, they generalize in ways that depend on linear order and on the identities of specific words. These results suggest that children's training data, if taken to be words alone, may not contain enough hierarchical cues to encourage hierarchical generalization in a learner without a hierarchical bias. Thus, explaining human acquisition of syntax may require postulating that humans have stronger inductive biases than those of LSTMs and Transformers, or that information other than word sequences plays a crucial role.4

2In past work these rules have been framed as transformations named MOVE-FIRST and MOVE-MAIN (McCoy et al., 2020). We instead follow Berwick et al. (2011) and frame the child's knowledge as a relationship between sentences.

3Though these two rules are the most prominent in prior literature, other rules are possible; see Section 5.2.

2 Background

Though HIERARCHICALQ and LINEARQ often make the same predictions, the evidence in children's input may still favor HIERARCHICALQ. The most straightforward evidence would be utterances that directly disambiguate the rules, such as (4b). Pullum and Scholz (2002) show that disambiguating examples appear in the Wall Street Journal, in literature, and arguably in child-directed speech, but direct evidence may still be too rare to robustly support HIERARCHICALQ (Legate and Yang, 2002).
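The contrast between the two rules can be made concrete with a short sketch. This is an illustration, not the paper's code: it assumes the declarative is already tokenized and that the positions of its auxiliaries (including which one is the main-clause auxiliary, something a real learner would have to infer from hierarchical structure) are given.

```python
# Toy illustration of the two candidate question-formation rules.

def linear_q(tokens, aux_indices):
    """LINEARQ: front the linearly first auxiliary of the declarative."""
    i = min(aux_indices)
    return [tokens[i]] + tokens[:i] + tokens[i + 1:]

def hierarchical_q(tokens, main_aux_idx):
    """HIERARCHICALQ: front the main-clause auxiliary of the declarative."""
    i = main_aux_idx
    return [tokens[i]] + tokens[:i] + tokens[i + 1:]

decl = "the boy who has talked can read".split()
aux_indices = [3, 5]  # "has" (relative clause) and "can" (main clause)

print(" ".join(linear_q(decl, aux_indices)))   # has the boy who talked can read (cf. (4c))
print(" ".join(hierarchical_q(decl, 5)))       # can the boy who has talked read (cf. (4b))
```

On sentences with a single auxiliary, such as (1a), the two functions return the same string, which is why most naturally-occurring questions cannot disambiguate the rules.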
Nonetheless, children might conclude that yes/no questions obey HIERARCHICALQ rather than LINEARQ based on indirect evidence—evidence that other syntactic phenomena are hierarchical (Mulligan et al., 2021).

To test if the cues favoring HIERARCHICALQ render a hierarchical bias unnecessary, we study how well non-hierarchically-biased models acquire English yes/no questions. Several prior papers have used this approach, but their training data differed from children's input in important ways: some used synthetic datasets (Lewis and Elman, 2001; Frank and Mathis, 2007; Clark and Eyraud, 2007; McCoy et al., 2020), others used massive Internet corpora (Lin et al., 2019; Warstadt and Bowman, 2020), and those that used child-directed speech simplified the data by replacing each word with its part of speech (Perfors et al., 2011; Bod et al., 2012). We used training data closer to children's input, namely sentences from CHILDES with word identities preserved, rather than being converted to parts of speech. Two other recent works have also trained neural networks on CHILDES data (Pannitto and Herbelot, 2020; Huebner et al., 2021), but neither investigated yes/no questions.

One particularly important reason for training models on CHILDES is that, in prior work, different types of training data have yielded diverging results: recent models trained on synthetic data failed to properly acquire yes/no questions (McCoy et al., 2020; Petty and Frank, 2021), whereas ones trained on large Internet corpora scored well on evaluations of yes/no questions (Lin et al., 2019; Warstadt and Bowman, 2020). Given these differing results, it is not clear from past work how these models would generalize when faced with the type of data that children receive.

4Our datasets and models will be uploaded online soon to facilitate further research.

3 Overview of Experimental Setup

We evaluated models on yes/no questions in two ways.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' First, we used relative acceptability judg- ments (Experiment 1): We trained neural networks on the task of language modeling (predicting the next word at every point in the sentence) and evalu- ated whether they assigned a higher probability to sentences consistent with LINEARQ or HIERAR- CHICALQ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Our second approach was based on text generation (Experiment 2): We trained networks to take in a declarative sentence and output the corresponding question, and tested whether they generalized in a way more consistent with LIN- EARQ or HIERARCHICALQ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Under both framings, we trained models on data from CHILDES and evaluated them on targeted datasets constructed to differentiate LINEARQ and HIERARCHICALQ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 4 Experiment 1: Relative Acceptability 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='1 Dataset To train models on data as similar as possible to the sentences children receive, we extracted data from CHILDES (MacWhinney, 2000).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' We used the North American English portion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' We wished to replicate children’s input, so we excluded the children’s own utterances, leaving a 9.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='6-million- word corpus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' We allocated 90% of the data to training, 5% to validation, and 5% to testing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' We replaced words that appeared two or fewer times in the training set with , giving a replacement rate of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='3%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' See Appendix A for more details.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='2 Task: Next-Word Prediction We trained models on next-word prediction, also known as language modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' We chose this task for two reasons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' First, it is clear empirically that next-word prediction can teach neural networks a substantial amount about syntax (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=', Hu et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=', 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Second, it is plausible that humans per- form some version of next-word prediction during sentence processing (Altmann and Kamide, 1999;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Hale, 2001;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Levy, 2008;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Kutas et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=', 2011) and that such prediction may play a role in acquisition (Elman, 1991).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Thus, while next-word prediction is certainly not the only goal of human language learners, we view this task as a reasonable first step in emulating human language acquisition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='3 Architectures We used two neural network architectures: LSTMs (Hochreiter and Schmidhuber, 1997) and Trans- formers (Vaswani et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=', 2017).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' We chose these models for two reasons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' First, they have been the most successful architectures in NLP.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Thus, we have reason to believe that, of the types of low-bias models invented, these two are the ones most likely to discover linguistic regularities in our CHILDES training data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Second, the two architectures pro- cess sequences very differently (via recurrence vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' via attention).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Thus, if both generalize similarly, we would have evidence that what was learned is strongly evidenced in the data, rather than due to a quirk of one particular architecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' For our LSTMs, we used 2 layers, a hidden and embedding size of 800, a batch size of 20, a dropout rate of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='4, and a learning rate of 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' For our Trans- formers, the corresponding values were 4, 800, 10, 0.' 
2, and 5, and we used 4 attention heads. We chose these values based on a hyperparameter search described in Appendix B. All following results are averaged across 10 runs with different random seeds.

4.4 Results: Language Model Quality

Before testing models on questions, we used perplexity to evaluate how well they captured the basic structure of their training domain. As a baseline, we used a 5-gram model with Kneser-Ney smoothing (Kneser and Ney, 1995) trained with KenLM (Heafield, 2011). The test set perplexity for the 5-gram baseline was 24.37, while the average test set perplexity for the LSTMs and Transformers was 20.05 and 19.69, respectively. For perplexity, lower is better. Thus, both neural network types outperformed the strong baseline of a smoothed 5-gram model, showing that they performed well at capturing the basic statistics of their training domain.[5]

[5] For an intuitive illustration of our model quality, see the sample text generated by them in Appendix H.

4.5 General Syntactic Evaluation

As an additional way to check the validity of our setup, we evaluated our models on the Zorro dataset (Huebner et al., 2021), which is based on BLiMP (Warstadt et al., 2020a). Zorro contains 24 evaluations, each of which targets one syntactic phenomenon (e.g., subject-verb agreement) and involves sentence pairs for which one sentence is grammatical and the other is minimally different but ungrammatical (e.g., by violating subject-verb agreement). A model is said to get a sentence pair correct if it assigns a higher probability to the grammatical sentence than the ungrammatical one. Huebner et al. (2021) showed that Transformers trained on CHILDES data can perform well on many of the Zorro categories, so if our setup is sound, our own models should also perform well on Zorro. See Appendix D for full results. For each syntactic phenomenon, most model re-runs scored above 0.9, though at least one scored near the chance level of 0.5. For each re-run of each architecture there is at least one phenomenon for which the model scores over 0.97, and many models score 1.00 on some phenomena. Thus, all models score well on at least some syntactic evaluations, attaining results comparable to those of Huebner et al. (2021) and providing additional support for the validity of our setup. We now test whether these models have also successfully learned the specific phenomenon that we focus on, yes/no questions, a phenomenon not included in the Zorro dataset.
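The forced-choice criterion used for Zorro is simple to state: a pair is credited if the model assigns the grammatical sentence a higher probability than its minimally different counterpart. A minimal sketch of that comparison, using a hypothetical toy bigram scorer in place of a trained LSTM or Transformer (the scorer and its backoff value are our own illustration, not the paper's code):

```python
def sentence_logprob(sentence, bigram_logprobs, unk=-10.0):
    """Sum log P(w_i | w_{i-1}) over the sentence, with <s> padding.

    `bigram_logprobs` maps (prev, word) pairs to log-probabilities;
    unseen bigrams fall back to a flat `unk` score. This stands in for
    a real language model's scoring function.
    """
    tokens = ["<s>"] + sentence.lower().split()
    return sum(
        bigram_logprobs.get((prev, word), unk)
        for prev, word in zip(tokens, tokens[1:])
    )

def forced_choice_accuracy(pairs, bigram_logprobs):
    """Fraction of (grammatical, ungrammatical) pairs where the
    grammatical sentence receives the higher score."""
    correct = sum(
        sentence_logprob(good, bigram_logprobs)
        > sentence_logprob(bad, bigram_logprobs)
        for good, bad in pairs
    )
    return correct / len(pairs)

# Toy model that prefers agreeing subject-verb pairs.
toy_model = {
    ("<s>", "the"): -0.5, ("the", "dogs"): -1.0,
    ("dogs", "bark"): -1.0, ("dogs", "barks"): -6.0,
}
pairs = [("the dogs bark", "the dogs barks")]
acc = forced_choice_accuracy(pairs, toy_model)  # 1.0: grammatical wins
```

With a real model, `sentence_logprob` would be replaced by the model's summed token log-probabilities; the comparison itself is unchanged.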
4.6 Yes/No Questions

Evaluation Dataset: Forced-Choice Acceptability Judgments. As a first way to test whether our models have learned HIERARCHICALQ, we evaluate whether they assign higher probabilities to sentences consistent with HIERARCHICALQ than to minimally different sentences that are ungrammatical. For this purpose, we create an evaluation dataset containing groups of 6 questions, each created by starting with a declarative sentence, such as (5), and then deleting the first, main, or neither auxiliary, and inserting the first or main auxiliary at the front of the sentence.[6] For instance, in (6b), the first auxiliary has been preposed, and the main auxiliary has been deleted.

(5) The dog who has seen a boy did try.

(6) a. Has the dog who seen a boy did try?
    b. Has the dog who has seen a boy try?
    c. Has the dog who has seen a boy did try?
    d. Did the dog who seen a boy did try?
    e. Did the dog who has seen a boy try?
    f. Did the dog who has seen a boy did try?

[6] It would be possible to also use a ‘prepose other’ category, where an auxiliary not in the input is inserted (McCoy et al., 2018). We excluded this category because using it would raise complications about which ‘other’ auxiliary to choose.
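The construction of each group follows mechanically from the positions of the two auxiliaries. A sketch of the two-prepose-by-three-delete construction (a hypothetical helper of our own; the auxiliary indices are supplied by hand rather than parsed):

```python
def make_question_candidates(tokens, first_aux_idx, main_aux_idx):
    """Build the 6 candidate questions from a declarative.

    Each candidate preposes a copy of either the first or the main
    auxiliary and deletes the first auxiliary, the main auxiliary, or
    neither. `tokens` is the declarative without final punctuation.
    """
    candidates = {}
    preposes = [("first", first_aux_idx), ("main", main_aux_idx)]
    deletes = [("first", first_aux_idx), ("main", main_aux_idx), ("none", None)]
    for prepose_name, prepose_idx in preposes:
        for delete_name, delete_idx in deletes:
            body = [t for i, t in enumerate(tokens) if i != delete_idx]
            question = [tokens[prepose_idx].capitalize()] + body
            candidates[(prepose_name, delete_name)] = " ".join(question) + " ?"
    return candidates

decl = "the dog who has seen a boy did try".split()
cands = make_question_candidates(decl, first_aux_idx=3, main_aux_idx=7)
# ("main", "main") yields (6e): "Did the dog who has seen a boy try ?"
correct = cands[("main", "main")]
```

Applied to (5), the six values of `cands` reproduce (6a) through (6f).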
Within each group, we evaluate which question the model assigned the highest probability to. If a model has correctly learned HIERARCHICALQ, it should assign the highest probability to the question consistent with this rule, such as (6e). Several past papers about yes/no questions have used the same general approach (Lewis and Elman, 2001; Reali and Christiansen, 2005). However, these papers considered only pairs of sentences, whereas we consider groups of 6 to allow for a wider range of possible generalizations that a model might have learned.

To generate the declaratives from which we formed groups of 6 questions, we used the context-free grammar (CFG) in Appendix F, which has a vocabulary selected from the most common words in CHILDES. Each declarative generated by the CFG (e.g., (5)) contains two auxiliary verbs: one before the sentence's main verb and one inside a relative clause modifying the subject. One potential problem is that some questions are consistent with both HIERARCHICALQ and LINEARQ. For instance, (7a) can be formed from (7b) with the HIERARCHICALQ-consistent steps PREPOSE-MAIN, DELETE-MAIN, or from (7c) with the LINEARQ-consistent steps PREPOSE-FIRST, DELETE-MAIN.

(7) a. Did the boy who did see the person laugh?
    b. The boy who did see the person did laugh.
    c. The boy who did see the person can laugh.
To avoid this problem, we required that the auxiliary before the main verb must select for a different verb inflection than the one in the relative clause. For instance, in (5), did selects for the verb's bare form, while has selects for the past participle form. Thus, the auxiliary at the start of the question could only correspond to whichever auxiliary in the declarative has the same selectional properties.[7]

[7] A model could succeed on this dataset with a rule that relates the auxiliary at the start of a question with the last auxiliary in the declarative form. Since our models fail on this dataset, this consideration is not relevant here.

Results: Relative Question Acceptability. For each sentence group, we used per-word perplexity to see which of the 6 candidates the models scored most highly.[8] For both LSTMs and Transformers, the correct category (PREPOSE MAIN, DELETE MAIN) was the second-rarest choice, and the most frequent preference was for PREPOSE FIRST, DELETE MAIN, a category that is only partially correct because it references linear order in addition to hierarchical structure (Figure 1). Thus, neither model displays preferences consistent with the correct, fully hierarchical generalization. The two model types showed similar scores, which may mean that these results are largely driven by the statistics of the training data that both models share, rather than the models' differing inductive biases.

[8] We also explored evaluation of the models with a more complex measure called SLOR, where we additionally normalized scores by word frequency (Pauls and Klein, 2012). Both metrics produced qualitatively similar results, so we only report the simpler metric here. See Appendix C.1.

[Figure 1 (bar chart omitted): The question types that models prefer when offered a choice between 6 questions. These 6 questions are formed by modifying a declarative with a relative clause on the subject (e.g., "The person who has seen this boy did try.") according to ‘prepose’ and ‘delete’ rules. The correct category is PREPOSE MAIN, DELETE MAIN. Within each architecture, the proportions across all 6 question types necessarily sum to 1. Each bar shows the average across 10 model re-runs, with single-standard-deviation error bars.]
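The per-word perplexity selection among candidates can be sketched in a few lines. Normalizing by length matters here because the delete-none candidates are one word longer than the others; the scores below are hypothetical stand-ins for a trained model's output, not the paper's numbers:

```python
import math

def per_word_perplexity(logprob, n_words):
    """Convert a natural-log sentence probability into per-word perplexity."""
    return math.exp(-logprob / n_words)

def preferred_candidate(scored):
    """Given {label: (logprob, n_words)}, return the label with the
    lowest per-word perplexity, i.e. the model's preferred question."""
    return min(scored, key=lambda label: per_word_perplexity(*scored[label]))

# Hypothetical model scores for three of the six candidates.
scored = {
    ("main", "main"): (-18.0, 9),    # ppl = e^2.0  ~ 7.39
    ("first", "main"): (-20.7, 9),   # ppl = e^2.3  ~ 9.97
    ("main", "none"): (-21.0, 10),   # ppl = e^2.1  ~ 8.17 (longer sentence)
}
winner = preferred_candidate(scored)  # ("main", "main")
```

Note that the raw log-probability of the longer delete-none candidate is the lowest, yet its per-word perplexity is not the worst; length normalization keeps the six candidates comparable.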
One of the incorrect categories, PREPOSE MAIN, DELETE NONE, such as (6f), only requires reference to hierarchical structure, so it could be said to capture the hierarchical nature of yes/no questions. Nonetheless, this category was also relatively rare: combining the two fully hierarchical possibilities (PREPOSE MAIN, DELETE MAIN and PREPOSE MAIN, DELETE NONE) accounts for only 26% of LSTM preferences and 27% of Transformer preferences, meaning that both models favored a sentence generated at least partially based on linear order over 70% of the time.

There are two likely reasons why our models performed so poorly on yes/no questions when they performed well on many of the phenomena in the Zorro dataset (Section 4.5). First, yes/no questions may simply be harder to learn than the other phenomena; indeed, yes/no questions are often singled out as being likely to pose difficulties for a general-purpose learner (Section 1). Alternatively, it might be that the six-way evaluation we used for yes/no questions is stricter than the binary judgments used for the Zorro dataset.

5 Experiment 2: Question Formation

The previous experiment was designed to operate entirely in the next-word-prediction paradigm, motivated by arguments from past literature about the strength and relative ecological validity of next-word prediction as a training objective (see Section 4.2). However, one of this setup's shortcomings is that HIERARCHICALQ describes correspondences between questions and declaratives, but Experiment 1 focused on questions alone, with no consideration of declaratives. In this second experiment, to better capture that HIERARCHICALQ is defined over sentence pairs, we trained models on a sentence-pair task: transforming a declarative into a question (McCoy et al., 2020). For instance, given the child did learn, the model must produce did the child learn ?

We evaluated models in two ways. First, we checked if the models' predictions fully matched the correct questions. This full-sentence evaluation is demanding, and models might fail it for reasons unrelated to our core hypotheses. For instance, given the child did learn, the model might produce did the baby learn, which would be marked as incorrect even though this lexical error is not relevant to HIERARCHICALQ. As a metric that is less demanding and that also more directly targets HIERARCHICALQ, we measured whether the first word of the output question corresponded to the first or main auxiliary of the input. Critically, LINEARQ and HIERARCHICALQ make different predictions for the first word of a question so long as the two auxiliaries are distinct: see (4). Because this framing lets the model freely generate its output (instead of choosing one option from a pre-specified set), we allow for the possibility that the rule learned by models may not be identical to any of our manually generated hypotheses.

Solely training models to perform this transformation involves the implicit assumption that, when children acquire English yes/no questions, the only evidence they leverage is English yes/no questions. However, other types of sentences may also provide useful evidence (Pearl and Mis, 2016): e.g., wh-questions also illustrate subject-auxiliary inversion (Pullum and Scholz, 2002), while, more generally, many types of sentences could provide evidence that the syntax as a whole is hierarchical (Perfors et al., 2011). To explore this possibility, we compared a condition in which models were only trained to perform question formation (the QUESTION FORMATION condition) to another in which models were first pretrained on next-word prediction with the exact same setup as in Experiment 1 before being further trained to perform question formation (the NEXT-WORD PREDICTION + QUESTION FORMATION condition).
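The first-word evaluation described above reduces to comparing the generated question's first word against the input's two auxiliaries. A sketch of that comparison (the function name and labels are our own; the auxiliary positions would come from the grammar that generated the declarative):

```python
def classify_first_word(predicted_first, first_aux, main_aux):
    """Classify a generated question's first word against the two
    hypotheses. When the first and main auxiliaries are identical,
    LinearQ and HierarchicalQ cannot be distinguished."""
    matches_linear = predicted_first == first_aux
    matches_hierarchical = predicted_first == main_aux
    if matches_linear and matches_hierarchical:
        return "both"
    if matches_hierarchical:
        return "hierarchicalq_only"
    if matches_linear:
        return "linearq_only"
    return "neither"

# "a boy who is playing can try ." has first aux "is" and main aux "can",
# so a model that outputs "can ..." is following the hierarchical rule.
label = classify_first_word("can", first_aux="is", main_aux="can")
```

Aggregating this label over an evaluation set gives the proportion of outputs consistent with each hypothesis.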
5.1 Dataset

Training Set: Our question formation dataset consisted of the yes/no questions in the CHILDES Treebank (Pearl and Sprouse, 2013a,b), a parsed subset of CHILDES containing 189,359 sentences. We used these parses to extract all yes/no questions from the CHILDES Treebank and derive their corresponding declarative forms. The resulting declarative was concatenated with the question. An example declarative/question pair is:

(8) you can spell your name . can you spell your name ?

The training set consisted of 10,870 declarative/question pairs, the validation set 1,360 pairs, and the test set 1,358 pairs (we will call this test set the randomly-partitioned test set to distinguish it from two other evaluation sets discussed below). We trained models to perform next-word prediction on such concatenated sentence pairs. The first-word accuracy of the trained model was then computed based on the model's prediction for the word after the period in each test example, while the full-sentence accuracy was computed based on its predictions for all tokens after the period.

All questions in the randomly-partitioned test set were withheld from both the question-formation training set and the next-word-prediction training set. Thus, models had not seen these test examples in their training, even in the NEXT-WORD PREDICTION + QUESTION FORMATION condition in which they were trained on both tasks.

Evaluation Sets: In addition to the randomly-partitioned test set, we used CFGs to generate two targeted evaluation sets. As in Experiment 1, we selected the CFGs' vocabulary from common words in our CHILDES data. In sentences generated from the first CFG, the sentence's first auxiliary was also its main auxiliary, so LINEARQ and HIERARCHICALQ make the same predictions; (8) exemplifies the type of declarative/question pair in this dataset. We call this dataset FIRST-AUX = MAIN-AUX. For sentences generated by the second CFG, the main auxiliary was the second auxiliary in the sentence; thus, these examples disambiguate LINEARQ and HIERARCHICALQ. Example (9) is a declarative/question pair from this evaluation set.

(9) a boy who is playing can try . can a boy who is playing try ?

We call this dataset FIRST-AUX ≠ MAIN-AUX. See Appendix F for the CFGs used. We sampled 10,000 declarative sentences from these grammars and transformed them into questions according to HIERARCHICALQ to create our evaluation sets.
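A toy version of the FIRST-AUX ≠ MAIN-AUX construction can illustrate the pipeline: sample a declarative whose subject carries a relative clause with its own auxiliary, then form the question by preposing the main auxiliary. The tiny grammar below is our own stand-in for the full CFGs in Appendix F:

```python
import random

# A deliberately tiny grammar: every declarative is NP + relative
# clause (with one auxiliary) + main auxiliary + verb, so the main
# auxiliary is always the second auxiliary in the sentence.
GRAMMAR = {
    "S":        [["NP", "RC", "AUX_MAIN", "V"]],
    "NP":       [["a", "boy"], ["the", "dog"]],
    "RC":       [["who", "AUX_RC", "Ving"]],
    "AUX_RC":   [["is"], ["was"]],
    "Ving":     [["playing"], ["laughing"]],
    "AUX_MAIN": [["can"], ["did"]],
    "V":        [["try"], ["laugh"]],
}

def sample(symbol, rng):
    """Recursively expand `symbol`, returning a list of terminal tokens."""
    if symbol not in GRAMMAR:
        return [symbol]
    expansion = rng.choice(GRAMMAR[symbol])
    return [tok for part in expansion for tok in sample(part, rng)]

def hierarchical_q(tokens, main_aux_idx):
    """Form a question per HierarchicalQ: move the main auxiliary
    to the front of the sentence."""
    rest = tokens[:main_aux_idx] + tokens[main_aux_idx + 1:]
    return [tokens[main_aux_idx]] + rest

rng = random.Random(0)
decl = sample("S", rng)  # e.g. tokens of "a boy who is playing can try"
question = hierarchical_q(decl, main_aux_idx=5)  # main aux is fixed at index 5
pair = " ".join(decl) + " . " + " ".join(question) + " ?"
```

Applied to the declarative in (9), `hierarchical_q` produces exactly the paired question "can a boy who is playing try".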
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='2 Results Randomly-Partitioned Test Set The LSTMs and Transformers in the QUESTION FORMA- TION condition performed well on the randomly- partitioned test set, with a full-question accuracy of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='68 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='014 and 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='87 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='005 (averaged across 10 reruns with margins indicating one standard de- viation).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' The models in the NEXT-WORD PRE- DICTION + QUESTION FORMATION condition per- formed similarly well, with a full-question accu- racy of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='66 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='008 for the LSTMs and 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='93 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='004 for the Transformers.' 
For both model types, the first-word accuracy for the question was nearly 1.00 across reruns. We suspect that Transformers have a stronger full-question accuracy because producing the question requires copying all words from the declarative (but in a different order). Copying is likely easy for Transformers because they can attend to specific words in the prior context, while our LSTMs must compress the entire context into a fixed-size vector, which may degrade the individual word representations. Because both model types achieved near-perfect performance on the crucial first-word accuracy metric, we conclude that our models have successfully learned how to handle the types of declarative/question pairs that we extracted from the CHILDES Treebank.

Targeted Evaluation Sets
On our two targeted evaluation sets, models almost never produced the complete question correctly.
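A minimal sketch of the two metrics as described above (our own reimplementation, not the authors' evaluation script); the example predictions and rerun scores are invented for illustration:

```python
# Hedged sketch of the two evaluation metrics: exact-match accuracy over the
# whole question vs. the lenient first-word accuracy. Not the paper's code;
# the example strings and rerun scores below are invented.
from statistics import mean, stdev

def full_question_accuracy(preds, golds):
    """Strict metric: the entire predicted question must match the gold one."""
    return mean(1.0 if p == g else 0.0 for p, g in zip(preds, golds))

def first_word_accuracy(preds, golds):
    """Lenient metric: only the first word (the fronted auxiliary) must match."""
    return mean(1.0 if p.split()[:1] == g.split()[:1] else 0.0
                for p, g in zip(preds, golds))

# Scores from independent reruns are reported as mean +/- one standard deviation.
rerun_scores = [0.67, 0.69, 0.66, 0.70, 0.68]
print(f"{mean(rerun_scores):.2f} +/- {stdev(rerun_scores):.3f}")
```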
Turning to the more lenient measure of first-word accuracy, for examples on which LINEARQ and HIERARCHICALQ predict the same first output word (FIRST-AUX = MAIN-AUX), the Transformer trained only on question formation performed strongly, while the Transformer trained on both tasks, and both LSTMs, performed reasonably well (Figure 2; note models could choose any word in their vocabulary to begin the output, so chance performance is near 0.00).

Figure 2: Proportion of model-produced questions that were consistent with the linear rule LINEARQ and/or the hierarchical rule HIERARCHICALQ. In the FIRST-AUX = MAIN-AUX dataset, the first auxiliary is the main auxiliary, so both LINEARQ and HIERARCHICALQ produce the correct question string. The FIRST-AUX ≠ MAIN-AUX dataset disambiguates the two rules. Each bar shows the average across 10 model reruns, with error bars showing one standard deviation.
For the crucial cases that disambiguate the two rules (FIRST-AUX ≠ MAIN-AUX), both models in both conditions performed more consistently with LINEARQ than HIERARCHICALQ. Training on next-word prediction before question formation had inconsistent effects: it modestly increased the likelihood of hierarchical generalization in LSTMs, yet it decreased that likelihood in Transformers.

Lexical Specificity
In Appendix G, we further break down the FIRST-AUX ≠ MAIN-AUX results based on the auxiliaries' identity. The generalization pattern varied considerably across auxiliary pairs. For some auxiliary pairs, the auxiliary chosen to begin the question was usually neither auxiliary in the input (Figure 3, left facet). For other pairs, models usually chose the first auxiliary, regardless of lexical identity (Figure 3, middle facet). Finally, for some pairs, the auxiliary chosen was usually the same one, regardless of whether it was the first or main auxiliary (Figure 3, right facet).
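The rule-consistency analysis on the disambiguating set can be sketched as follows (an illustrative assumption about how such a classification could be done, not the paper's code):

```python
# Illustrative sketch (not the paper's code): on FIRST-AUX != MAIN-AUX
# examples, the model's first output word reveals which rule, if either,
# its behavior is consistent with.

def classify_first_word(first_word, first_aux, main_aux):
    """Return which rule the first produced word is consistent with."""
    # On this evaluation set first_aux != main_aux, so the labels are disjoint;
    # on FIRST-AUX = MAIN-AUX examples the two rules could not be told apart.
    if first_word == main_aux:
        return "HierarchicalQ"
    if first_word == first_aux:
        return "LinearQ"
    return "neither"

# e.g. input "the children who do play can learn": first aux "do", main aux "can"
print(classify_first_word("do", "do", "can"))   # LinearQ
print(classify_first_word("can", "do", "can"))  # HierarchicalQ
print(classify_first_word("did", "do", "can"))  # neither
```

The "neither" category matters for the lexical-specificity analysis below, where models sometimes begin the question with an auxiliary that appeared nowhere in the input.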
Generalization based on lexical identity is rarely considered in past discussions of English yes/no question acquisition. Of the papers on this phenomenon (see Clark and Lappin (2010), Lasnik and Lidz (2017), and Pearl (2021) for overviews), the only one to our knowledge that discusses lexical specificity is Frank and Mathis (2007), which studied models trained on synthetic data. Our results highlight the importance of testing for a broad range of generalizations: lexically-specific hypotheses appear attractive for our low-bias learners, so an account of what biases can yield human-like learning should rule out these lexically-specific hypotheses along with linear ones.

Figure 3: Lexical specificity in model behavior. Each facet considers only the evaluation examples containing the two auxiliaries in the facet heading; e.g., the can and do facet includes, for example, the inputs the children who can play do learn and the children who do play can learn. The bars show the proportion of model predictions for the first word of the output that are consistent with four potential movement rules, averaged across 10 model reruns and with error bars showing one standard deviation above and below the mean. This plot only shows an illustrative subset of auxiliary pairs for one model type (Transformers in the NEXT-WORD PREDICTION + QUESTION FORMATION condition); see Appendix G for the full results.
6 Discussion

We have found that, when trained on child-directed speech, two types of standard neural networks performed reasonably well at capturing the statistical properties of the dataset, yet their handling of English yes/no questions was more consistent with a linear rule (LINEARQ) than the correct hierarchical rule (HIERARCHICALQ). These results support the hypothesis that a learner requires a hierarchical bias to consistently learn hierarchical rules when learning from the linguistic data children receive.

6.1 Takeaways for LSTMs and Transformers

When trained on massive corpora, LSTMs and Transformers perform impressively on some syntactic evaluations. Based on such results, it is tempting to conclude that the general-purpose biases of these architectures suffice to yield human-like syntax acquisition. Our results caution against this interpretation: when we trained the same architectures on data more similar to children's input, they failed to learn the structure of English yes/no questions.
Thus, at least when learning from text alone, LSTMs and Transformers do not display human-like language learning: they do not generalize as humans do from the data that humans receive.

6.2 Takeaways for the Poverty of the Stimulus Debate

Below we specify four possible positions in the poverty-of-the-stimulus debate about the adequacy of children's input for inducing hierarchical rules in low-bias learners, arranged from assuming the most limited to the most expansive innate component:

(10) Any inductive biases: Any learner trained on CHILDES will generalize like humans do.

(11) Any inductive biases that enable in-distribution learning: Any learner that captures the statistical patterns of the training distribution will generalize to HIERARCHICALQ.

(12) Some non-hierarchical inductive biases: Some general-purpose learners will generalize as humans do, but others will not.

(13) Only a hierarchical inductive bias: No general-purpose learners will generalize as humans do: hierarchical biases are necessary.
Position (10) is clearly false: many learners cannot learn certain aspects of syntax, no matter their training data (e.g., bigram models cannot capture long-distance dependencies). Our work shows that position (11) is also false: though our models performed well on the in-distribution test sets of Experiments 1 and 2, they did not generalize in human-like ways. This leaves positions (12) and (13), which our existing results cannot differentiate. It is possible that only learners with hierarchical inductive biases can demonstrate human-like language learning (position (13)), but also that some learners without this bias can succeed (position (12)), just not the learners we tested. For further discussion of how computational modeling can bear on learnability arguments, see Wilcox et al. (2021).
One potential solution supporting position (12) would be that learners leverage the hierarchical structure of some syntactic phenomenon to help conclude that other, impoverished phenomena are hierarchical (Perfors et al., 2011; Mulligan et al., 2021). However, our results from Experiment 2 show that giving learners access to a wider range of phenomena does not automatically improve hierarchical generalization: models' performance on question formation was not substantially improved (and in some cases was even harmed) when they were trained not just on question formation but also on next-word prediction on the entire CHILDES corpus. Thus, although training on text that contains many linguistic phenomena can give models a hierarchical inductive bias when the training is done over large Internet corpora (Warstadt and Bowman, 2020; Mueller et al., 2022), our results provide evidence that this conclusion does not extend to models trained on child-directed speech.

Though both (12) and (13) remain as possibilities, we believe that our results more strongly support (13). Of all currently available general-purpose learners, LSTMs and Transformers are the best at modeling the probabilistic structure of linguistic data. Therefore, if child-directed speech contains clear evidence for the hierarchical nature of yes/no questions (evidence so clear that at least some general-purpose learners could recognize it), it is likely that LSTMs and Transformers would be among the set of general-purpose learners that could use this evidence to make hierarchical generalizations in our experiments. The fact that these architectures instead predominantly favored linear generalizations therefore supports position (13).
6.3 How to test for HIERARCHICALQ

We have argued that an ideal simulation of the acquisition of English yes/no questions would have the following properties:

(14) The training data should be similar to children's linguistic input.

(15) The training task should be ecologically valid.

(16) The evaluation method should focus on correspondences between pairs of sentences rather than the acceptability of individual sentences.

Property (14) motivated our use of text from CHILDES as the training data. We are not aware of a single experimental setup that fully satisfies both Property (15) and Property (16), so we instead used two experiments, each one focusing on one property at the cost of satisfying the other one less well. Experiment 1 works entirely in the context of the relatively ecologically valid task of next-word prediction, motivated by Property (15), but its evaluation is only based on the acceptability of individual sentences, failing to satisfy Property (16).
Experiment 2 fully satisfies Property (16) by using an evaluation based on sentence pairs, at the cost of including a less ecologically valid training component based on sentence transformations. Both experiments yielded qualitatively similar conclusions (failure of models to learn HIERARCHICALQ).

6.4 Quantity of Training Data

The size of our training set was plausibly within the range from which children can acquire HIERARCHICALQ. Crain and Nakayama (1987) found that children between ages 3 and 5 behaved much more consistently with HIERARCHICALQ than LINEARQ. Though these children made many errors, their errors were usually compatible with a hierarchical rule (e.g., PREPOSE MAIN, DELETE NONE errors: see Section 4.6).
By age 3, American children receive approximately 10 to 33 million words of input (Hart and Risley, 1995), and the 8.5 million words of our training set is close to the lower end of that range. Thus, it is reasonable to suppose that a learner that generalizes as children do would favor HIERARCHICALQ after being trained on our training set. Our models, in contrast, regularly preferred sentences generated in ways based on linear order (Figures 1 and 2), a category of error that is very rare in children (Crain and Nakayama, 1987; Ambridge et al., 2008). In order to give our models the strongest chance of generalizing correctly, it would have been ideal to provide a quantity of data closer to 33 million words, the high end of Hart and Risley's range. Our data source did not contain enough text to make this possible, but future work could investigate ways to augment the data using other sources.
6.5 Type of Training Data

Our training set was both qualitatively and quantitatively closer to children's input than the massive Internet corpora standardly used to train models in NLP (Linzen, 2020). This difference is important: Lin et al. (2019), Warstadt and Bowman (2020), and Mueller et al. (2022) all found evidence that models trained on large Internet corpora performed well on yes/no question evaluations, whereas our models trained on CHILDES performed poorly, though we cannot be certain the differences in results are solely due to differences in the training data, since these prior papers used different model architectures, training tasks, and evaluation setups.

Though our training data are more similar to children's input than massive Internet corpora are, differences remain. Our experiments omit several aspects of a child's experience that might help them acquire syntax, such as prosody (Morgan and Demuth, 1996), visual information (Shi et al., 2019), and meaning (Fitz and Chang, 2017; Abend et al., 2017), all of which might correlate with syntactic structure and thus provide cues to the correct hierarchical generalization. On the other hand, our dataset might present an easier learning scenario than children are faced with, because children must learn to segment the speech stream into words (Lakhotia et al., 2021), while our models do not need to. Further, though real-world grounding could provide helpful information, learners might struggle to leverage this information due to difficulty determining what is being discussed in the physical world (Gleitman et al., 2005).

7 Conclusion

In this work, we trained two types of neural networks (LSTMs and Transformers) on sentences of the types available to children and then analyzed what they had learned about English yes/no questions.
Across several evaluation paradigms, these models failed to generalize in human-like ways: humans display hierarchical generalization, while the models' generalization was instead based on linear order and individual words' identities. Our results support the hypothesis that human-like linguistic generalization requires biases stronger than those of LSTMs and Transformers. Future work should investigate what inductive biases enable successful generalization. One approach would be to test architectures with built-in hierarchical structure; past work has shown that such architectures have a hierarchical bias (McCoy et al., 2020) and generalize better on the hierarchical phenomenon of subject-verb agreement (Kuncoro et al., 2018; Lepori et al., 2020), so they may also generalize better on English yes/no questions. A final direction would be to expand the input beyond words alone so that learners can leverage hierarchical structure that is present in other modalities, such as hierarchical structure in visual scenes.

Ethics Statement

Use of human data: While we did not collect any new human data ourselves, many of our analyses involved the use of prior datasets within the CHILDES database. All of these datasets were collected in accordance with IRB policies at the institutions of the data collectors, and all followed standard practices in obtaining informed consent and deidentifying data.

Risks and limitations: The main risk of our proposed analyses is that future work using the same analyses might draw overly strong conclusions based on increased model performance, leading to overestimates of model strength. Such overestimates are an issue because they can lead users to place more trust in a model than is warranted.
To clarify, we view strong performance on our evaluation datasets as necessary but not sufficient to demonstrate human-like learning. Thus, if models perform poorly on our datasets (as the models we evaluated did), then we have strong reason to conclude that models are not learning in human-like ways. If future models perform better, such results would be consistent with human-like learning but would not conclusively establish that models learn as humans do, as they might instead be using some shallow heuristic that is not controlled for in our datasets. In other words, a criterion that is necessary but not sufficient facilitates strong conclusions about failure but does not facilitate strong conclusions about success. If future papers are faced with models that are more successful, such papers would ideally supplement results based on our datasets with analyses of models' internal strategies in order to more conclusively establish that what they have learned is not a spurious heuristic.

9 https://talkbank.org/share/irb/

References

Omri Abend, Tom Kwiatkowski, Nathaniel J. Smith, Sharon Goldwater, and Mark Steedman. 2017. Bootstrapping language acquisition. Cognition, 164:116–143.

Gerry T. M. Altmann and Yuki Kamide. 1999. Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3):247–264.

Ben Ambridge, Caroline F. Rowland, and Julian M. Pine. 2008. Is structure dependence an innate constraint? New experimental evidence from children's complex-question production. Cognitive Science, 32(1):222–255.

Robert Berwick, Paul Pietroski, Beracah Yankama, and Noam Chomsky. 2011. Poverty of the stimulus revisited. Cognitive Science, 35:1207–1242.

Rens Bod, Margaux Smets, et al. 2012. Empiricist solutions to nativist problems using tree-substitution grammars. Workshop on Computational Models of Language Acquisition and Loss: EACL.

Noam Chomsky. 1965. Aspects of the Theory of Syntax, 50th anniversary edition. The MIT Press.

Noam Chomsky. 1980. Rules and Representations. Columbia University Press.

Alexander Clark and Rémi Eyraud. 2007. Polynomial identification in the limit of substitutable context-free languages. Journal of Machine Learning Research, 8(8).

Alexander Clark and Shalom Lappin. 2010. Linguistic Nativism and the Poverty of the Stimulus. John Wiley & Sons.

Stephen Crain and Mineharu Nakayama. 1987. Structure dependence in grammar formation. Language, pages 522–543.

Jeffrey L. Elman. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7(2):195–225.

Hartmut Fitz and Franklin Chang. 2017. Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166:225–250.

Robert Frank and Donald Mathis. 2007. Transformational networks. Models of Human Language Acquisition, 22.

Lila R. Gleitman, Kimberly Cassidy, Rebecca Nappa, Anna Papafragou, and John C. Trueswell. 2005. Hard words. Language Learning and Development, 1(1):23–64.

Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically.

John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Second Meeting of the North American Chapter of the Association for Computational Linguistics.

Betty Hart and Todd R. Risley. 1995. Meaningful Differences in the Everyday Experience of Young American Children. Paul H. Brookes Publishing.

Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187–197, Edinburgh, Scotland. Association for Computational Linguistics.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725–1744, Online. Association for Computational Linguistics.

Philip A. Huebner, Elior Sulem, Cynthia Fisher, and Dan Roth. 2021. BabyBERTa: Learning more grammar with small-scale child-directed language. In Proceedings of CoNLL.

Xuân-Nga Cao Kam, Iglika Stoyneshka, Lidiya Tornyova, Janet D. Fodor, and William G. Sakas. 2008. Bigrams and the richness of the stimulus. Cognitive Science, 32(4):771–787.

Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In 1995 International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 181–184.

Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436, Melbourne, Australia. Association for Computational Linguistics.

Marta Kutas, Katherine A. DeLong, and Nathaniel J. Smith. 2011. A look around at what lies ahead: Prediction and predictability in language processing. In Predictions in the Brain: Using Our Past to Generate a Future.

Kushal Lakhotia, Eugene Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, et al. 2021. On generative spoken language modeling from raw audio. Transactions of the Association for Computational Linguistics, 9:1336–1354.

Howard Lasnik and Jeffrey L. Lidz. 2017. The argument from the poverty of the stimulus. The Oxford Handbook of Universal Grammar, pages 221–248.

Julie Anne Legate and Charles D. Yang. 2002. Empirical re-assessment of stimulus poverty arguments. The Linguistic Review, 19(1-2):151–162.

Michael Lepori, Tal Linzen, and R. Thomas McCoy. 2020. Representations of syntax [MASK] useful: Effects of constituency and dependency structure in recursive LSTMs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3306–3316, Online. Association for Computational Linguistics.

Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126–1177.

John Lewis and Jeffrey Elman. 2001. Learnability and the statistical structure of language: Poverty of stimulus arguments revisited. Proceedings of the 26th Annual Boston University Conference on Language Development, 1.

Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241–253, Florence, Italy. Association for Computational Linguistics.

Tal Linzen. 2020. How can we accelerate progress towards human-like linguistic generalization? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5210–5217, Online. Association for Computational Linguistics.

Brian MacWhinney. 2000. The CHILDES project: Tools for analyzing talk. Lawrence Erlbaum Associates.

R. Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks.

R. Thomas McCoy, Robert Frank, and Tal Linzen. 2020. Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks.

James L. Morgan and Katherine Demuth. 1996. Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition. Psychology Press.

Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, and Sebastian Schuster. 2022. Coloring the blank slate: Pre-training imparts a hierarchical inductive bias to sequence-to-sequence models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1352–1368, Dublin, Ireland. Association for Computational Linguistics.

Karl Mulligan, Robert Frank, and Tal Linzen. 2021. Structure here, bias there: Hierarchical generalization by jointly learning syntactic transformations. In Proceedings of the Society for Computation in Linguistics 2021, pages 125–135, Online. Association for Computational Linguistics.

Ludovica Pannitto and Aurélie Herbelot. 2020. Recurrent babbling: Evaluating the acquisition of grammar from limited input data. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 165–176, Online. Association for Computational Linguistics.

Adam Pauls and Dan Klein. 2012. Large-scale syntactic language modeling with treelets. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 959–968, Jeju Island, Korea. Association for Computational Linguistics.

Lisa Pearl. 2021. Poverty of the stimulus without tears. Language Learning and Development, pages 1–40.

Lisa Pearl and Benjamin Mis. 2016. The role of indirect positive evidence in syntactic acquisition: A look at anaphoric one. Language, 92:1–30.

Lisa Pearl and Jon Sprouse. 2013a. Computational models of acquisition for islands. Experimental Syntax and Islands Effects, pages 109–131.

Lisa Pearl and Jon Sprouse. 2013b.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Syntactic islands and learning biases: Combining experimental syntax and computational modeling to investigate the lan- guage acquisition problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Language Acquisition, 20(1):23–68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Andrew Perfors, Josh Tenenbaum, and Terry Regier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' The learnability of abstract syntactic princi- ples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Cognition, 118:306–338.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Jackson Petty and Robert Frank.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Trans- formers generalize linearly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' arXiv preprint arXiv:2109.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='12036.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Geoffrey K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Pullum and Barbara C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Scholz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Em- pirical assessment of stimulus poverty arguments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' The Linguistic Review, 18(1-2):9–50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Florencia Reali and Morten H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Christiansen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Un- covering the richness of the stimulus: Structure de- pendence and indirect statistical evidence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Cogni- tive Science, 29(6):1007–1028.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Haoyue Shi, Jiayuan Mao, Kevin Gimpel, and Karen Livescu.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Visually grounded neural syntax ac- quisition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 1842–1861, Florence, Italy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Attention is all you need.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Advances in neural information pro- cessing systems, pages 5998–6008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Alex Warstadt and Samuel R Bowman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Can neu- ral networks acquire a structural bias from raw lin- guistic data?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Proceedings of the 42nd Annual Con- ference of the Cognitive Science Society.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Alex Warstadt and Samuel R Bowman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' What artificial neural networks can tell us about human language acquisition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' arXiv preprint arXiv:2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='07998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo- hananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2020a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' BLiMP: The benchmark of lin- guistic minimal pairs for english.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Transactions of the Association for Computational Linguistics, 8:377–392.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, and Samuel R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Bowman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2020b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Learning which features matter: RoBERTa acquires a prefer- ence for linguistic generalizations (eventually).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 217–235, Online.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Association for Computa- tional Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Ethan Wilcox, Richard Futrell, and Roger Levy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Using computational models to test syntactic learn- ability.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' lingbuzz preprint lingbuzz/006327.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' What do RNN language models learn about filler–gap dependencies?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' In Proceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 211–221, Brussels, Belgium.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Taha Yasseri, András Kornai, and János Kertész.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' A practical approach to language complexity: A Wikipedia case study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' PLoS ONE, 7(11):e48386.' 
A CHILDES preprocessing details

The train, test, and validation split kept each document in the corpora intact to allow for learning of context. Since a document roughly corresponds to a single recording session, and the sentence order within each document was not randomized, the networks could utilize cross-sentence context while predicting the next word. Generally, we kept the data as close as possible to the actual input that the child receives. However, in some cases we modified tokenization to match the CHILDES Treebank, a syntactically parsed subset of the CHILDES corpora. For instance, contractions were split; e.g., we replaced don't with do n't. The ages of the children vary by corpus, ranging from six months to twelve years. Almost 95% (49/52) of the corpora consist of transcriptions with children between one and six years of age.
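As an illustration, Treebank-style contraction splitting can be done with a single regex pass. This is a hypothetical sketch, not the preprocessing code actually used for the paper, and it covers only the common English contraction suffixes:

```python
import re

# Split contractions off their host word, Treebank-style:
# "don't" -> "do n't", "we're" -> "we 're", "I'll" -> "I 'll".
# Illustrative only; the paper's pipeline follows the CHILDES Treebank.
CONTRACTION = re.compile(r"(?i)(\w+)(n't|'re|'ll|'ve|'s|'d|'m)\b")

def split_contractions(sentence: str) -> str:
    return CONTRACTION.sub(r"\1 \2", sentence)

print(split_contractions("don't"))  # do n't
```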
Note that for Experiment 2, we used the same vocabulary as in Experiment 1, which means that words not present in Experiment 1's vocabulary were replaced with unknown tokens. The unprocessed CHILDES datasets were downloaded in XML format (https://childes.talkbank.org/data-xml/) from the online XML version of the CHILDES database (MacWhinney, 2000) at https://childes.talkbank.org. A modified NLTK CHILDESCorpusReader (https://www.nltk.org/howto/childes.html) was used to parse the XML into plain text for training. The CHILDES dataset is licensed for use under a CC BY-NC-SA 3.0 license (https://talkbank.org/share/rules.html).
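The vocabulary-sharing step for Experiment 2 amounts to mapping out-of-vocabulary words to an unknown token. A minimal sketch, assuming the token is spelled <unk> (the exact token string is our assumption; the helper name is illustrative):

```python
def replace_oov(tokens, vocab, unk="<unk>"):
    """Map any token outside the Experiment 1 vocabulary to the unknown token."""
    return [tok if tok in vocab else unk for tok in tokens]

vocab = {"you", "want", "the", "ball", "?"}
print(replace_oov(["you", "want", "the", "xylophone", "?"], vocab))
# ['you', 'want', 'the', '<unk>', '?']
```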
Under the terms of this license (https://creativecommons.org/licenses/by-nc-sa/3.0/), the data can be freely used and adapted, as long as it is not used for commercial purposes and as long as attribution is provided. Our usage fits these criteria. Though CHILDES contains many corpora in many languages, we use only corpora from the North American English subset of CHILDES, which contains child-directed speech with many different North American children. See the CHILDES database for more details. By the CHILDES rules for data citation (https://talkbank.org/share/citation.html), research that relies on more than 6 of the corpora need only cite the overall database, not each individual corpus. All the data on CHILDES must adhere to IRB guidelines (https://talkbank.org/share/irb/), including a requirement for anonymity. The final dataset will be included in our GitHub repository, to be released soon.
This dataset is not intended for commercial use.

CHILDES corpora included The CHILDES corpora that we used were: Bates, Bernstein, Bliss, Bloom70, Bloom73, Bohannon, Braunwald, Brent, Brown, Carterette, Clark, Cornell, Demetras1, Demetras2, EllisWeismer, Evans, Feldman, Garvey, Gathercole, Gelman, Gillam, Gleason, HSLLD, Haggerty, Hall, Higginson, Kuczaj, MacWhinney, McCune, McMillan, Morisset, NH, Nelson, NewEngland, NewmanRatner, Normal, POLER, Peters, Post, Rollins, Sachs, Sawyer, Snow, Soderstrom, Sprott, Suppes, Tardif, Valian, VanHouten, VanKleeck, Warren, Weist.
B Hyperparameter Search and Model Implementation

We conducted a hyperparameter search for each of the architectures we investigated (LSTMs and Transformers). Our broad goal in this paper is to investigate the extent to which capturing the statistical properties of the CHILDES dataset naturally leads a learner to capture the structure of yes/no questions. Therefore, we sought to find the hyperparameter settings that made models most effective at capturing the statistical properties of CHILDES data, a goal which we operationalized as finding the model with the lowest perplexity.

[12] https://www.nltk.org/howto/childes.html
[13] https://talkbank.org/share/rules.html
[14] https://creativecommons.org/licenses/by-nc-sa/3.0/
[15] https://talkbank.org/share/citation.html
[16] https://talkbank.org/share/irb/

B.1 Hyperparameter search

LSTMs   For LSTMs, we explored the following hyperparameters via a grid search, for a total of 144 models:

1. layers: 2
2. hidden and embedding size: 200, 800
3. batch size: 20, 80
4. dropout rate: 0.0, 0.2, 0.4, 0.6
5. learning rate: 5.0, 10.0, 20.0
6. random seed: 3 per parameter combination, unique for each LSTM

The LSTM model with the lowest perplexity on the validation set after training had 2 layers, a hidden and embedding size of 800, a batch size of 20, a dropout rate of 0.4, and a learning rate of 10.0.[17] An LSTM model with these hyperparameters has 37,620,294 parameters.

Transformers   For the Transformers, we performed a hyperparameter sweep over the following hyperparameters, for a total of 84 models:
1. layers: 2, 4, 8, 16
2. context size: 50, 100, 500
3. hidden and embedding size: 200, 800, 1600
4. heads: 2, 4, 8, 16
5. batch size: 20, 80, 160
6. dropout rate: 0.0, 0.2, 0.4, 0.6
7. learning rate: 0.5, 1.0, 5.0, 10.0, 20.0
8. random seed: 3 per parameter combination

[17] The hyperparameters we explored for the LSTMs were those of Gulordava et al. (2018), the code for which can be found at https://github.com/facebookresearch/colorlessgreenRNNs

LSTMs           Prepose First   Prepose Main
Delete First    0.01            0.14
Delete Main     0.39            0.12
Delete None     0.20            0.14

Table 1: Numerical results for LSTMs' preference for questions consistent with combinations of 'prepose' and 'delete' rules. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

The Transformer model with the lowest perplexity after training had 4 layers, a context size of 500, a hidden size of 800, a batch size of 10, 4 heads, a dropout rate of 0.2, and a learning rate of 5.0. A Transformer model with these parameters has 42,759,494 parameters.

B.2 Comment on model size

Although neural networks generally perform better as they increase in size, the best-performing models that we found were not the largest ones. This result is consistent with the finding of Warstadt et al. (2020b) that, for small training sets, smaller language models sometimes outperform larger ones. Thus, it is unlikely that scaling up models beyond the range we investigated would have yielded better CHILDES language models than the ones we trained.

B.3 Implementation

All models were implemented in PyTorch by building on code from https://github.com/facebookresearch/colorlessgreenRNNs and https://github.com/pytorch/examples/tree/main/word_language_model, and trained using Nvidia K80 GPUs. The final models will be included in our GitHub repository, which will be released soon. These models are not intended for commercial use.

C PREPOSE-ONE&DELETE-ONE Full Results

See Table 1 and Table 2 for these results.
C.1 Results using SLOR

See Table 3 and Table 4 for these results.

Transformers    Prepose First   Prepose Main
Delete First    0.01            0.16
Delete Main     0.31            0.06
Delete None     0.25            0.21

Table 2: Numerical results for Transformers' preference for questions consistent with combinations of 'prepose' and 'delete' rules. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

LSTMs           Prepose First   Prepose Main
Delete First    0.01            0.14
Delete Main     0.33            0.80
Delete None     0.26            0.18

Table 3: Analysis of LSTMs' preference for questions consistent with combinations of 'prepose' and 'delete' rules, evaluated using SLOR. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

Transformers    Prepose First   Prepose Main
Delete First    0.01            0.15
Delete Main     0.27            0.40
Delete None     0.29            0.24

Table 4: Analysis of Transformers' preference for questions consistent with combinations of 'prepose' and 'delete' rules, evaluated using SLOR. Within each architecture, the proportion preferences across all 6 question types necessarily sum to 1.

D BabyBERTa dataset evaluation

For an illustrative subset of the results on the Zorro evaluation dataset (discussed in Section 4.5), see Figure 4. For the full results, see Figure 5.

E Move-One Dataset Results

One approach used in several past papers (e.g., Lewis and Elman (2001) and Reali and Christiansen (2005)) is to evaluate models using pairs of sentences that can be formed by starting with a declarative sentence (e.g., (17)) and moving one of its auxiliaries to the front of the sentence. The first sentence in each pair (e.g., (18a)) follows HIERARCHICALQ, because the main auxiliary is moved, while the second (e.g., (18b)) follows LINEARQ, because the first auxiliary is moved.

(17) The children who are talking are sleeping.

(18) a. Are the children who are talking sleeping?
     b. Are the children who talking are sleeping?

[Figure 4: The performance of a selected subset of model re-runs on a selected subset of the Zorro evaluations. Each Zorro evaluation targets a specific syntactic phenomenon—in the cases shown here, irregular verbs, subject-verb agreement across relative clauses, and correct argument ordering. Heatmap values omitted.]

If a model assigns a higher probability to (18a) than to (18b), that is evidence that the model favors HIERARCHICALQ over LINEARQ. While this preference is a necessary component of correctly learning HIERARCHICALQ, it is by no means sufficient: indeed, Kam et al. (2008) showed that models can prefer sentences consistent with HIERARCHICALQ over sentences consistent with LINEARQ due to shallow n-gram statistics rather than due to knowledge of hierarchical structure. More generally, there are infinitely many other incorrect hypotheses besides LINEARQ, and demonstrating successful learning of HIERARCHICALQ would require ruling out all of them.
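The pairwise comparison described above can be sketched as follows: score each candidate question as the sum of per-word conditional log-probabilities under a language model, and count the model as preferring the higher-scoring sentence. The `toy_prob` table below is a hypothetical stand-in for a trained LSTM or Transformer, not the paper's actual models.

```python
import math

def sentence_logprob(sentence, next_word_prob):
    """Sum log P(w_i | w_1..w_{i-1}) over the words of the sentence."""
    words = sentence.split()
    total = 0.0
    for i, w in enumerate(words):
        total += math.log(next_word_prob(tuple(words[:i]), w))
    return total

# Hypothetical bigram probabilities standing in for a trained model's
# next-word distribution (illustrative numbers only).
BIGRAMS = {
    ("who", "are"): 0.5,      # "who are" is common in the training data
    ("are", "talking"): 0.2,
    ("who", "talking"): 0.01, # "who talking" is ungrammatical and rare
}

def toy_prob(prefix, word):
    if not prefix:
        return 0.1
    return BIGRAMS.get((prefix[-1], word), 0.05)  # smoothed fallback

hier = "are the children who are talking sleeping ?"   # (18a), HIERARCHICALQ
lin = "are the children who talking are sleeping ?"    # (18b), LINEARQ
prefers_hierarchical = (
    sentence_logprob(hier, toy_prob) > sentence_logprob(lin, toy_prob)
)
```

Under these toy probabilities the hierarchical question scores higher, but as the text notes, such a preference can arise from shallow n-gram statistics alone, which is exactly what this bigram stand-in illustrates.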
Investigating all possibilities is intractable, but we can at least investigate a few additional plausible ones. Thus, in the main paper we depart from prior work by considering a greater number of candidate sentences than just the pairs of sentences used in prior work.

To create the MOVE-ONE dataset, we randomly sampled 10,000 declarative sentences from our CFGs for which the first and main auxiliary were identical and then modified them to give 10,000 sentence pairs. To create the PREPOSE-ONE&DELETE-ONE dataset, we randomly sampled a different 10,000 declarative sentences from our CFGs for which the first and main auxiliary were different and then modified them to give 10,000 6-tuples of sentences. See Appendix F for more details about the CFGs.

F Context Free Grammars

Figure 6 contains the context-free grammar used for the analyses in Section 4.6. Figures 7 and 8 contain the context-free grammars used for the targeted evaluation sets in Section 5.2. Figure 9 contains the vocabulary used for all of these datasets.

G Breakdown by lexical identity

Here we further break down models' predictions for the FIRST-AUX ≠ MAIN-AUX evaluation set based on the identities of the two auxiliaries in the input sentence. Figure 10 gives the results for the LSTM in the QUESTION FORMATION condition; Figure 11 for the LSTM in the NEXT-WORD PREDICTION + QUESTION FORMATION condition; Figure 12 for the Transformer in the QUESTION FORMATION condition; and Figure 13 for the Transformer in the NEXT-WORD PREDICTION + QUESTION FORMATION condition.

H Example generated text

Figure 14 gives some example text generated by our models. Models trained on next-word prediction produce their predictions as a probability distribution over the vocabulary. To use such models to generate text, we sample a word from this distribution and then use that word as the model's input for the next time step.
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='75 74 61 92 51 63 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='95 84 100 88 98 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='93 73 80 53 38 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='78 42 53 83 73 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='70 95 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='81 99 83 98 95 74 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='77 54 36 50 43 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='47 83 65 58 90 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='78 72 82 92 53 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='74 76 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='53 44 55 42 47 80 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='72 59 90 78 72 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='87 92 58 66 96 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='80 99 88 98 94 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='80 73 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='61 93 77 72 90 91 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='53 65 90 80 98 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='90 98 95 73 83 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='55 43 44 44 33 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='91 67 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='64 96 83 98 81 100 97 72 79 54 40 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='58 40 54 80 71 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='60 92 83 69 38 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='98 96 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='71 75 55 47 53 42 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='49 79 86 58 88 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='79 81 64 91 61 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='62 92 84 97 85 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='42 53 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='86 74 63 88 76 80 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='93 90 53 63 94 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='83 99 86 96 96 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='75 82 55 50 53 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 01 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 02 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 03 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 04 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 05 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 06 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} 
+page_content='LSTM 07 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 08 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 09 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='LSTM 10 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 01 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 02 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 03 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 04 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 05 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 06 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 07 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 08 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 09 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Transformer 10 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_determiner_noun−across_1_adjective ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_determiner_noun−between_neighbors ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_subject_verb−across_prepositional_phrase ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_subject_verb−across_relative_clause ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_subject_verb−in_question_with_aux ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='agreement_subject_verb−in_simple_question ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='anaphor_agreement−pronoun_gender ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='argument_structure−dropped_argument ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='argument_structure−swapped_arguments ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='argument_structure−transitive ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='binding−principle_a ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} 
+page_content='case−subjective_pronoun ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='ellipsis−n_bar ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='filler−gap−wh_question_object ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='filler−gap−wh_question_subject ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='irregular−verb ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='island−effects−adjunct_island ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='island−effects−coordinate_structure_constraint ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='local_attractor−in_question_with_aux ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='npi_licensing−matrix_question ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='npi_licensing−only_npi_licensor ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='quantifiers−existential_there ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='quantifiers−superlative ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Evaluation ' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Model ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='40 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='60 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='80 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='100 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='% Correct ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Figure 5: Results on the targeted syntactic evaluations in Huebner et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' (2021) in percent accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' Evaluation names in Figure 4 were shortened.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content=' ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_S RC_S_BARE MAIN-AUX VP_S_PAST} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_S RC_S_PAST MAIN-AUX VP_S_BARE} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_S RC_S_BARE MAIN-AUX VP_S_PROG} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_S RC_S_PROG MAIN-AUX VP_S_BARE} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_S RC_S_PAST MAIN-AUX VP_S_PROG} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_S RC_S_PROG MAIN-AUX VP_S_PAST} ' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_P RC_P_BARE MAIN-AUX VP_P_PAST} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_P RC_P_PAST MAIN-AUX VP_P_BARE} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_P RC_P_BARE MAIN-AUX VP_P_PROG} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_P RC_P_PROG MAIN-AUX VP_P_BARE} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_P RC_P_PAST MAIN-AUX VP_P_PROG} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='S ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {NP_P RC_P_PROG MAIN-AUX VP_P_PAST} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='NP_S ' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Det_S N_S} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='NP_P ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='NP_O ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_S_BARE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_S IV } ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_S_BARE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_S TV NP_O} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_S_PROG ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_S_BE IV_IS} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} 
+page_content='VP_S_PROG ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_S_BE TV_IS NP_O} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_S_PAST ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_S_HAS IV_HAS} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_S_PAST ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_S_HAS TV_HAS NP_O} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_P_BARE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_P IV} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_P_BARE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_P TV NP_O} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_P_PROG ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_P_BE IV_IS} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_P_PROG ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_P_BE TV_IS NP_O} ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_P_PAST ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_P_HAS IV_HAS} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='VP_P_PAST ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Aux_P_HAS TV_HAS NP_O} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_S_BARE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_S_PROG ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Rel Aux_S_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='TV_IS Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_S_PAST ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Rel Aux_S_HAS TV_HAS Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_P_BARE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Rel Aux_P IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_P_PROG ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='→ {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='TV_IS Det_P N_P} ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QtFJT4oBgHgl3EQfJyy0/content/2301.11462v1.pdf'} +page_content='RC_P_PAST ' metadata={'source': 
RC_P → {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S | Rel Aux_P_HAS TV_HAS Det_P N_P}

Figure 6: CFG used to generate PREPOSE-ONE-AND-DELETE-ONE evaluation dataset

S → {NP_M_S VP_M_S | NP_M_P VP_M_P}
NP_M_S → {Det_S N_S | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P}
NP_M_P → {Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P}
NP_O → {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P | Det_S N_S RC_S | Det_P N_P RC_P}
VP_M_S → {Aux_S IV}
VP_M_S → {Aux_S TV NP_O}
VP_M_S → {Aux_S_BE IV_IS}
VP_M_S → {Aux_S_BE TV_IS NP_O}
VP_M_S → {Aux_S_HAS IV_HAS}
VP_M_S → {Aux_S_HAS TV_HAS NP_O}
VP_M_P → {Aux_P IV}
VP_M_P → {Aux_P TV NP_O}
VP_M_P → {Aux_P_BE IV_IS}
VP_M_P → {Aux_P_BE TV_IS NP_O}
VP_M_P → {Aux_P_HAS IV_HAS}
VP_M_P → {Aux_P_HAS TV_HAS NP_O}
RC_S → {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P}
RC_S → {Rel Aux_S_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE TV_IS Det_P N_P}
RC_S → {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | Rel Aux_S_HAS TV_HAS Det_P N_P}
RC_P → {Rel Aux_P IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P}
RC_P → {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE TV_IS Det_P N_P}
RC_P → {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S | Rel Aux_P_HAS TV_HAS Det_P N_P}

Figure 7: CFG used to generate FIRST-AUX = MAIN-AUX evaluation dataset

S → {NP_M_S VP_M_S | NP_M_P VP_M_P}
NP_M_S → {Det_S N_S | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P}
NP_M_P → {Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P}
NP_O → {Det_S N_S | Det_P N_P | Det_S N_S Prep Det_S N_S | Det_S N_S Prep Det_P N_P | Det_P N_P Prep Det_S N_S | Det_P N_P Prep Det_P N_P | Det_S N_S RC_S | Det_P N_P RC_P}
VP_M_S → {Aux_S IV}
VP_M_S → {Aux_S TV NP_O}
VP_M_S → {Aux_S_BE IV_IS}
VP_M_S → {Aux_S_BE TV_IS NP_O}
VP_M_S → {Aux_S_HAS IV_HAS}
VP_M_S → {Aux_S_HAS TV_HAS NP_O}
VP_M_P → {Aux_P IV}
VP_M_P → {Aux_P TV NP_O}
VP_M_P → {Aux_P_BE IV_IS}
VP_M_P → {Aux_P_BE TV_IS NP_O}
VP_M_P → {Aux_P_HAS IV_HAS}
VP_M_P → {Aux_P_HAS TV_HAS NP_O}
RC_S → {Rel Aux_S IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_S TV Det_S N_S | Rel Aux_S TV Det_P N_P}
RC_S → {Rel Aux_S_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_S_BE TV_IS Det_S N_S | Rel Aux_S_BE TV_IS Det_P N_P}
RC_S → {Rel Aux_S_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_S_HAS TV_HAS Det_S N_S | Rel Aux_S_HAS TV_HAS Det_P N_P}
RC_P → {Rel Aux_P IV | Rel Det_S N_S Aux_S TV | Rel Det_P N_P Aux_P TV | Rel Aux_P TV Det_S N_S | Rel Aux_P TV Det_P N_P}
RC_P → {Rel Aux_P_BE IV_IS | Rel Det_S N_S Aux_S_BE TV_IS | Rel Det_P N_P Aux_P_BE TV_IS | Rel Aux_P_BE TV_IS Det_S N_S | Rel Aux_P_BE TV_IS Det_P N_P}
RC_P → {Rel Aux_P_HAS IV_HAS | Rel Det_S N_S Aux_S_HAS TV_HAS | Rel Det_P N_P Aux_P_HAS TV_HAS | Rel Aux_P_HAS TV_HAS Det_S N_S | Rel Aux_P_HAS TV_HAS Det_P N_P}

Figure 8: CFG used to generate FIRST-AUX ≠ MAIN-AUX evaluation dataset
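For illustration, grammars of this shape can be sampled with a small recursive generator. The sketch below is not the authors' code; it uses a reduced, hypothetical subset of the rules and vocabulary just to show the expansion mechanism.

```python
import random

# Minimal sketch (not the authors' code) of sampling from a CFG like the ones above.
# Each nonterminal maps to a list of alternatives; an alternative is a list of symbols.
# Only a small subset of the rules and vocabulary is included here for illustration.
CFG = {
    "S": [["NP_M_S", "VP_M_S"], ["NP_M_P", "VP_M_P"]],
    "NP_M_S": [["Det_S", "N_S"], ["Det_S", "N_S", "Prep", "Det_S", "N_S"]],
    "NP_M_P": [["Det_P", "N_P"], ["Det_P", "N_P", "Prep", "Det_P", "N_P"]],
    "VP_M_S": [["Aux_S", "IV"]],
    "VP_M_P": [["Aux_P", "IV"]],
    "Det_S": [["the"], ["some"], ["this"]],
    "Det_P": [["the"], ["some"], ["those"]],
    "N_S": [["baby"], ["girl"], ["boy"]],
    "N_P": [["babies"], ["girls"], ["boys"]],
    "Aux_S": [["does"], ["did"], ["can"]],
    "Aux_P": [["do"], ["did"], ["can"]],
    "IV": [["play"], ["read"], ["sleep"]],
    "Prep": [["by"], ["behind"]],
}

def generate(symbol="S", rng=random):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in CFG:  # anything without a rule is a terminal word
        return [symbol]
    expansion = rng.choice(CFG[symbol])  # pick one alternative uniformly
    words = []
    for sym in expansion:
        words.extend(generate(sym, rng))
    return words

print(" ".join(generate("S")))  # prints a random sentence, e.g. "those girls can play"
```

Drawing many such samples (and filtering for the desired auxiliary configuration) is one way a FIRST-AUX = MAIN-AUX or FIRST-AUX ≠ MAIN-AUX evaluation set could be assembled.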
Det_S → {the | some | this}
Det_P → {the | some | those}
N_S → {baby | girl | boy | animal | child | person | horse}
N_P → {babies | girls | boys | animals | children | people | horses}
IV → {play | read | draw | sit | fall | talk | sleep | try | work | walk}
IV_IS → {playing | reading | drawing | sitting | falling | talking | sleeping | trying | working | walking}
IV_HAS → {played | read | drawn | sat | fallen | talked | slept | tried | worked | walked}
TV → {call | see | find | help | feed | know | pick | visit | watch | reach}
TV_IS → {calling | seeing | finding | helping | feeding | knowing | picking | visiting | watching | reaching}
TV_HAS → {called | seen | found | helped | fed | known | picked | visited | watched | reached}
Aux_P → {do | did | can | would | shall}
Aux_S → {does | did | can | would | shall}
Aux_S_BE → {is | was}
Aux_P_BE → {are | were}
Aux_S_HAS → {has}
Aux_P_HAS → {have}
Prep → {by | behind}
Rel → {who | that}

Figure 9: Vocabulary used for the PREPOSE-ONE-AND-DELETE-ONE, FIRST-AUX ≠ MAIN-AUX, and FIRST-AUX = MAIN-AUX evaluation datasets

Figure 10: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for LSTMs first trained on next-word prediction and then question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.
[Bar-chart panels not recoverable from the text extraction; only axis ticks (0.0–1.0), panel titles (AuxX = was/have/can/were/shall/would/does/are), the y-axis label "behavior consistent", and the legend (Comparison: First-vs-main, AuxY-vs-AuxX) survived.]

Figure 11: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for LSTMs trained only on question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.

Figure 12: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for Transformers first trained on next-word prediction and then question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.
[Bar-chart panels not recoverable from the text extraction; only axis ticks (0.0–1.0), panel titles (AuxX = was/have/can/were/shall/would/does/are), the y-axis label "behavior consistent", and the legend (Comparison: First-vs-main, AuxY-vs-AuxX) survived.]

Figure 13: Breakdown by the identities of the two auxiliaries for outputs in the FIRST-AUX ≠ MAIN-AUX evaluation set for Transformers trained only on question formation. The two leftmost bars in each cell show a First-vs-main comparison, while the two rightmost bars show an AuxY-vs-AuxX comparison.

Text from CHILDES:
do you eat in nursery school too ?
oh .
it 's nursery school .
xxx .
what do you get to eat ? what else do you get to eat besides a book ? do n’t play with those books . what else do you have ? what do you have in nursery school ? alright - you can take it . why do n’t you put the turtle back . well - you ’re not going to change his water now . tonight . tonight - you change it about ten times a day . no - you can do it later . forget it now .
too bad . too bad . that poor turtle . you do n’t let him sit there five minutes . that was five minutes ?

LSTM-generated text: do you eat in nursery school too ? oh . it ’s nursery school . no ! tomorrow ’s Friday . right . yeah - yeah .
yeah I know what . I told you that he liked snipsnap that way . well this is address the grayish kings . ok . what kind of bird do you want to play with ? what else do you want to do ? your crayons . got a circle . I ’m going to use these . there . remember ? ok do you want to put all of these there ?
ok let ’s put everything away . what do you want to do ? you want to play in the box ? it ’s just like a ball because it has a ball on it .

Transformer-generated text: do you eat in nursery school too ? oh . it ’s nursery school . mm-hm . and it looks like a little girl . mm-hm . like a little girl . hm , mm-hm .
looks like she ’s concerned about baseball this is a picture of a double football player . mm-hm . um , it has a uniform on it . mm-hm . anything more ? no . ok great . got a hat on . and a flower and a little flower . remember that ? do you remember that word ? what do you remember that word ?
do you remember what it word says ? it says seven - eight . what ’s that word ? it says eight . look at that word . three - four - six ... eight - nine ...

Figure 14: Comparison of text generated by the LSTM and Transformer models with a block of text chosen randomly from the training data. The LSTMs and Transformers were both seeded with the first three sentences of the text taken from CHILDES, which is underlined in the two model-generated texts.
Note that neither of the model-generated texts was cherry-picked, either for quality or to be representative of the models’ usual output: rather, they were the first things the models generated when seeded with the underlined portion above.

[Plot residue removed: per-auxiliary panels (AuxX = was, have, can, were, shall, does, are, plus several labels rendered upside-down by extraction) with 0.0–1.0 axis ticks, a "behavior consistent with rule" axis, and First-vs-main / AuxY-vs-AuxX comparison bars; the corresponding caption is not in this chunk.]
diff --git a/RtAyT4oBgHgl3EQf7_pZ/content/2301.00848v1.pdf b/RtAyT4oBgHgl3EQf7_pZ/content/2301.00848v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5ab1cc6ec794397cab36bb8c0f5ca90b4295f0bc --- /dev/null +++ b/RtAyT4oBgHgl3EQf7_pZ/content/2301.00848v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9d578434181fb13363567efd058fc893f5dbf2f36c12130fabc505f8d318caf +size 1347252 diff --git a/RtAyT4oBgHgl3EQf7_pZ/vector_store/index.pkl b/RtAyT4oBgHgl3EQf7_pZ/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..9af7d3c5ffef5c9690927b6ade20a2e58448fd6a --- /dev/null +++ b/RtAyT4oBgHgl3EQf7_pZ/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d140dbbe72b13b6aa6372bc96886ac7400b2632b877c2803e4142122cb3940f +size 214793 diff --git a/RtE2T4oBgHgl3EQfBwaC/vector_store/index.pkl b/RtE2T4oBgHgl3EQfBwaC/vector_store/index.pkl new file mode 100644 index
0000000000000000000000000000000000000000..d18d9419f6a801c71ea1ef21e9a5b6d7bd15a6c5 --- /dev/null +++ b/RtE2T4oBgHgl3EQfBwaC/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f4775dbd20ea8e3ffc53d6b12a89bd015e888469125e29829dbffbbf4107068 +size 108780 diff --git a/StAzT4oBgHgl3EQfXfxy/vector_store/index.pkl b/StAzT4oBgHgl3EQfXfxy/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..5b5e6ccb1e826a1a590f0d5453efdb18f73649cb --- /dev/null +++ b/StAzT4oBgHgl3EQfXfxy/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:288229a43ffa21712422b64ad15bebba206964573f1293f2aec5d9c11a9cf035 +size 316383 diff --git a/TNE0T4oBgHgl3EQfUwCf/content/tmp_files/2301.02255v1.pdf.txt b/TNE0T4oBgHgl3EQfUwCf/content/tmp_files/2301.02255v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..d432c1b0f14a5ee43847f5a52959f9fdea6962bf --- /dev/null +++ b/TNE0T4oBgHgl3EQfUwCf/content/tmp_files/2301.02255v1.pdf.txt @@ -0,0 +1,972 @@

Layer photovoltaic effect in van der Waals heterostructures
Oles Matsyshyn, Ying Xiong, Arpit Arora, Justin C. W. Song∗
Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371

We argue that the layer electric polarization of noncentrosymmetric layered heterostructures can be generically controlled by light, yielding a layer photovoltaic effect (LPE). The LPE possesses a rich phenomenology and can arise from myriad distinct mechanisms displaying strong sensitivity to symmetry (e.g., point group and time-reversal) as well as the presence or absence of a Fermi surface. We systematically classify these and unveil how the LPE manifests for a range of light polarizations, and even for unpolarized light.
These unusual layer photoresponses can be realized in a range of layered heterostructures, such as bilayer graphene aligned on hexagonal boron nitride, and manifest sizeable layer polarization susceptibilities in the terahertz frequency range that can be used as a novel means of bulk photodetection.

Mechanical stacks of atomically thin van der Waals (vdW) materials make it possible to build quantum phases from the bottom up, with properties that go beyond those of the individual constituent components [1, 2]. A particularly striking example is the emergence of a layer degree of freedom in stacks. Manipulating the relative degree to which each of the layers is charged, as characterized by the static interlayer polarization, affords a means to dramatically engineer bandstructure [3, 4], tune quantum geometric properties [5–8], and realize correlated phases of matter [9–12]. Since the interlayer polarization points out of plane, it is highly sensitive to vertical displacement fields. As a result, it has traditionally been controlled by toggling voltages sustained across a dual top- and bottom-gate sandwich architecture [3].

Here we argue that the interlayer polarization of noncentrosymmetric layered heterostructures can be generically controlled by light, manifesting a layer photovoltaic effect (LPE). Such LPE responses appear at second order in the electromagnetic (EM) field of the incident light and, as we show below, come in myriad distinct types; by performing a systematic classification, we delineate LPEs with distinct symmetry constraints, light-polarization dependence, and physical origins. Importantly, we find that LPEs can arise both from resonant interband absorption and from off-resonant virtual processes, in either metallic or insulating states, providing versatile means to control the interlayer polarization across different phases of matter.
We note that an example of an LPE was recently predicted in chiral vdW bilayers, where the interband absorption of circularly polarized light in such handed stacks induces a helicity-dependent photoinduced interlayer polarization [13]. Our work systematically shows that there exists a wide range of LPE interlayer responses beyond those known previously. For instance, at high frequencies corresponding to interband transitions, we find an injection-like process that enables non-helical light to induce a (second-order) nonlinear LPE even in an achiral and noncentrosymmetric vdW layered heterostructure, see Fig. 1. Surprisingly, the injection-like process also produces an interlayer current even though the electrons do not possess an out-of-plane velocity. Instead, this intrinsic interlayer current arises from the pumping of the interlayer polarization. Additionally, even at low frequency, without interband transitions, we find new types of large LPE responses that can be induced in the metallic regime. As we will see, these latter metallic contributions arise from the momentum-space asymmetry of the layer polarization of Bloch states on the Fermi surface.

FIG. 1: Layer photovoltaic effect and interlayer polarization. Photoinduced nonlinear interlayer polarization (here denoted by $p_z$) in noncentrosymmetric van der Waals stacks; we term this the layer photovoltaic effect (LPE). Shown is an example of a noncentrosymmetric and achiral vdW structure: bilayer graphene aligned with hexagonal boron nitride (BLG/hBN). These achiral structures possess an LPE induced by non-helical light. [Schematic labels: hBN, top layer, bottom layer, $\langle p_z\rangle$, BLG, interlayer distance $2d$.]

We anticipate that the LPEs we unveil can be used in novel bulk photodetection schemes that do not require p-n junctions.
Since many noncentrosymmetric vdW stacks are achiral, possessing mirror symmetries that render the helicity-dependent LPE vanishing, non-helical LPEs are crucial in activating interlayer polarization responses. Indeed, as we discuss below, the injection and metallic LPEs we unveil in our work can achieve giant susceptibility values in bilayer graphene (BLG) aligned with hexagonal boron nitride (hBN) heterostructures, orders of magnitude larger than those reported in chiral stacks [13], and manifest even for unpolarized light.

[arXiv:2301.02255v1 [cond-mat.mes-hall] 5 Jan 2023]

Interlayer polarization response. We begin by directly examining the LPE response, which is directly connected to the layer degree of freedom, $l$. The interlayer polarization operator is
$$\hat{P}^z = ed\sum_{\alpha l} \hat{l}\,|\alpha l\rangle\langle\alpha l| = \hat{p}^z d, \qquad (1)$$
where $\hat{l}$ is the layer index operator, $\hat{l}\,|\alpha l\rangle = l\,|\alpha l\rangle$, and $|\alpha l\rangle$ are orbitals localized on layer $l$. For clarity, here we concentrate on a bilayer system with an interlayer distance $2d$ (see Fig. 1). Our theory, however, is general and can be readily applied to multilayered systems.

When light is normally incident on the vdW stack (see Fig. 1), an out-of-plane static interlayer polarization can be induced. To see this, first consider the Hamiltonian $\hat{H}(\mathbf{k},t) = H_0(\mathbf{k}) + H_E(\mathbf{k},t)$, where $H_0(\mathbf{k})$ is the bare Hamiltonian with $|u_{n\mathbf{k}}\rangle$ and $\epsilon_n(\mathbf{k})$ the corresponding Bloch states and eigenenergies; here and below, roman indices denote band indices. $H_E(\mathbf{k},t)$ describes the light-matter interaction. For a monochromatic EM field, $H_E(\mathbf{k},t) = e\hat{\mathbf{r}}\cdot[\mathbf{E}e^{i\Omega t} + \mathbf{E}^* e^{-i\Omega t}]e^{\eta t}$ [14, 15], with $\hat{\mathbf{r}}$ the position operator, $\eta \to 0^+$ an adiabatic turn-on parameter, and $\Omega$ the frequency of the light.

The LPE can be obtained from Eq. (1) as $\langle P^z(t)\rangle = \int \mathrm{Tr}[\hat{\rho}(t)\hat{P}^z]\,d\mathbf{k}/(2\pi)^2$, where $\hat{\rho}$ is the density matrix.
Here the evolution of the density matrix and the resulting photoinduced interlayer polarization can be tracked in a standard perturbative fashion; see full details in the Supplementary Information (SI). This produces a second-order nonlinear photoinduced static interlayer polarization, $\langle \delta P^z_{\rm st}\rangle$, characterized by an LPE susceptibility tensor $\chi(\omega)$ as
$$\langle \delta P^z_{\rm st}\rangle = 2d\sum_{\alpha\beta} \mathrm{Re}[E^\alpha E^{\beta *}\chi^{\alpha\beta}(\Omega)], \qquad (2)$$
where $\alpha,\beta$ are spatial indices ($x$ or $y$). We will show that there are five contributions to $\chi^{\alpha\beta}(\omega)$, with distinct physical origins, symmetry properties, and phenomenology.

To proceed, it is useful to delineate between interband and intraband responses; for concreteness, we confine ourselves to band-non-degenerate systems. Three contributions comprise the interband response: injection (I), shift (S), and Fermi sea (FS):
$$\chi^{\alpha\beta}_{\rm inter}(\omega) = \chi^{\alpha\beta}_{\rm I}(\omega) + \chi^{\alpha\beta}_{\rm S}(\omega) + \chi^{\alpha\beta}_{\rm FS}(\omega), \qquad (3)$$
where $\chi_{\rm I}$ and $\chi_{\rm S}$ describe LPEs arising from resonant (real) interband excitations, whereas $\chi_{\rm FS}$ is off-resonant.

The injection susceptibility is $\chi^{\alpha\beta}_{\rm I}(\omega) = \tau\,\sigma^{\alpha\beta}_{\rm inter}(\omega)/4$, with
$$\sigma^{\alpha\beta}_{\rm inter}(\omega) = \frac{\pi e^2}{\hbar^2}\sum_{n,m,\mathbf{k}} \delta(\omega + \omega_{nm})\,A^\alpha_{nm}A^\beta_{mn}f_{nm}\,\delta P_{mn}, \qquad (4)$$

TABLE I: LPE mechanisms in TRS-preserving systems. (Reconstructed layout; the original allowed/forbidden and helicity marker glyphs were lost in extraction. Per the caption legend and the classification in the text: SC, FS, and injection are non-helical mechanisms, induced by linearly polarized light and reported in this work, while shift is a helical response, induced by circularly polarized light and reported in Ref. [13]. SC and FS stand for semiclassical and Fermi sea, respectively. Note that $\chi_{\rm B}$ (see text) is forbidden when TRS is preserved, but becomes activated when TRS is broken.)

                  SC          FS          shift       injection
 polarization     linear      linear      circular    linear
 reported in      this work   this work   Ref. [13]   this work

where $f_{nm} = f[\epsilon_n(\mathbf{k})] - f[\epsilon_m(\mathbf{k})]$ is the difference between the Fermi functions of the two bands and $\hbar\omega_{nm} = \epsilon_n(\mathbf{k})\,-$
pz +nm = ⟨unk∣ˆpz∣umk⟩ is a matrix element of the +polarization operator and Aα +nm = i⟨unk∣∂kα∣umk⟩ is the +interband Berry connection [16]. Here τ is a phenomeno- +logical relaxation time [17] that regularizes the χI re- +sponse. +χI(ω) represents the first new result of our work and +arises from the contrasting interlayer polarization when +an electron transitions from state n,k → m,k: its polar- +ization changes from pnn → pmm. As we will argue be- +low, this process also yields an anomalous photoinduced +interlayer current, controlled by an interlayer conductiv- +ity σαβ +inter(ω). This anomalous interlayer current acts as +a source that pumps the interlayer electric polarization. +As a result, χI(ω) grows with τ yielding large LPE. This +picture is similar to how bulk injection photocurrents are +often understood as arising from a photoinduced acceler- +ation [17, 18]. +Injection LPE contrasts with that of the shift LPE, +χS(ω), recently discussed in Ref. [13]: +χαβ +S (ω) = π e2 +̵h2 +∑ +n,m,k +δ(ω + ωnm)fnmAα +nmMβ +mn, +(5) +where +Mβ +mn = ∂β pmn +ωmn +− i∑ +c +[Aβ +mc +¯pz +cn +ωcn +− ¯pz +mc +ωmc +Aβ +cn], +(6) +and ¯pz +nm = pz +nm(1 − δnm). χS is intrinsic (τ independent) +and arises from an interlayer coordinate shift that is non- +vanishing in chiral media. +In contrast to the other interband responses, χFS(ω) +does not require real transitions. Instead, it corresponds +to nonlinear interlayer polarization sustained even for +light with frequency below the bandgap of an insulator. + +3 +It is written as χαβ +FS(ω) = χαβ +FS,1(ω) + χαβ +FS,2(ω), where +χαβ +FS,1(ω) = e2 +2̵h2 +∑ +n,m,k +Aα +nmAβ +mnfnmP [ pz +mm − pz +nn +(ω + ωnm)2 ], (7) +χαβ +FS,2(ω) = i e2 +̵h2 +∑ +n,m,k +fmnAα +nmMβ +mnP [ +1 +ω + ωnm +], +(8) +where P denotes the principal part. Strikingly, this off- +resonant LPE survives even for insulators (unlike its pho- +tocurrent counterpart [19, 20]). 
As a result, we denote it +Fermi Sea LPE since it arises from virtual processes be- +tween completely occupied and unoccupied bands. χFS +proceeds in much the same fashion as that of the con- +ventional dielectric response in insulators where similar +virtual processes contribute to the dynamical screening. +Indeed, χFS can be understood as its nonlinear rectified +counterpart. +The last new LPEs we unveil are intraband in na- +ture: +these depend on the presence of a Fermi sur- +face and exhibit a low-frequency divergence characteris- +tic of metallic responses in the clean limit. +These are +the semiclassical (SC) and Berry (B) LPE responses: +χintra(ω) = χSC(ω) + χB(ω), with SC susceptibility: +χαβ +SC(ω) = e2 +2̵h2 ∑ +n,k +∂α∂βfn +ω2 + τ −2 pz +nn, +(9) +and Berry susceptibility: +χαβ +B (ω) = e2 +̵h2 +∑ +n,m,k +pz +nmAα +mni∂βfnm +(ω + iτ −1)ωnm +, +(10) +where intra-band responses are regularized with a relax- +ation time τ [21]. Note that χB shares a similar density +matrix origin to its counterpart in the more familiar but +distinctly different photocurrent response (the Berry cur- +vature dipole induced nonlinear Hall effect [22]). +χSC(ω) has a semiclassical origin: it arises from a DC +shift (in momentum space) of the metallic Fermi sur- +face induced by periodic driving; this enables to pick +out a dipolar distribution of pnn(k) in momentum space. +χB(ω) arises from interband coherences sustained from +the periodic driving; unlike the other responses we have +discussed, χB(ω) has an odd-parity under time-reversal +(c.f. +∂βf term), vanishing in non-magnetic materials. +In what follows, we will focus on LPEs in time-reversal +symmetry (TRS) preserving systems. +Intrinsic out-of-plane interlayer current. We now pro- +ceed to argue that the origin of the large injection LPE +arises from an anomalous out-of-plane interlayer current +induced by oscillating in-plane electric fields. 
To see this, we note that the interlayer electric current is naturally described by $\hat{j}^z = d\hat{P}^z/dt = [\hat{P}^z, \hat{H}]/(i\hbar)$ [23]. Computing the expectation value of the interlayer current, $\langle j^z(t)\rangle$, we find
$$\mathrm{Tr}[\hat{j}^z\rho(t)] = \frac{1}{i\hbar}\mathrm{Tr}\{[\hat{P}^z,\hat{H}]\rho(t)\} = \mathrm{Tr}[\hat{P}^z\dot{\rho}(t)], \qquad (11)$$
where we have used the cyclic property of the trace, $\mathrm{Tr}\{[A,B]C\} = \mathrm{Tr}\{A[B,C]\}$, as well as the Liouville equation $i\hbar\,d\hat{\rho}(t)/dt = [\hat{H}(\mathbf{k},t),\hat{\rho}(t)]$. In order to isolate the rectified interlayer current, we focus on the period average $j^z_{\rm rectified} = \frac{1}{T}\int_0^T dt\,\lim_{\eta\to 0}\langle j^z(t)\rangle$, where $T = 2\pi/\Omega$ is the period of the driving EM field. For a finite drive frequency $\Omega$, this directly produces an out-of-plane interlayer current
$$j^z_{\rm rectified} = 2d\,\mathrm{Re}[E^\alpha E^{\beta *}\sigma^{\alpha\beta}_{\rm inter}(\Omega)], \qquad (12)$$
driven by an oscillating in-plane electric field $\mathbf{E}$. Here $\sigma^{\alpha\beta}_{\rm inter}(\Omega)$ is the interlayer nonlinear conductivity of Eq. (4). Interestingly, Eq. (4) depends only on intrinsic band-geometric quantities (e.g., $A^\alpha_{nm}$, $\delta P_{mn}$).

We note that second-order nonlinear photocurrent susceptibilities have recently been the subject of intense investigation [14, 15, 21, 24–34]. These works have concentrated on photocurrents formed from bulk itinerant electrons with a well-defined velocity. In contrast, $\sigma^{\alpha\beta}_{\rm inter}(\Omega)$ describes an out-of-plane current in a vdW stack hosting electrons that do not have a $z$-direction velocity. Instead, the interlayer current can be understood as a type of electric polarization pump that injects polarization.

Symmetry properties of the LPE. The mechanisms for the LPE discussed above have distinct symmetry properties. To see this, we re-write Eq.
(2) as
$$\langle \delta P^z_{\rm st}\rangle/d = \underbrace{(E^\alpha E^{*\beta} + E^{*\alpha}E^\beta)}_{\text{linearly polarized light}}\,\underbrace{\tfrac{1}{2}\left[\chi^{\alpha\beta}(\Omega) + \chi^{\alpha\beta}(-\Omega)\right]}_{\mathrm{Re}} + \underbrace{(iE^\alpha E^{*\beta} - iE^{*\alpha}E^\beta)}_{\text{circularly polarized light}}\,\underbrace{\tfrac{1}{2i}\left[\chi^{\alpha\beta}(\Omega) - \chi^{\alpha\beta}(-\Omega)\right]}_{\mathrm{Im}}, \qquad (13)$$
displaying how the real (imaginary) parts of the susceptibility tensor control the response to linearly (circularly) polarized irradiation. Recalling that under time-reversal symmetry we have $A_{nm}(\mathbf{k}) = A_{mn}(-\mathbf{k})$ and $p_{nm}(\mathbf{k}) = p_{mn}(-\mathbf{k})$, we obtain the non-helical (linear) vs helical (circular) classification in Table I, namely: $\chi_{\rm I}$, $\chi_{\rm FS}$ and $\chi_{\rm SC}$ mediate responses to linearly polarized light but are helicity insensitive; $\chi_{\rm S}$, in contrast, arises only under circularly polarized irradiation. Naturally, inversion symmetry zeroes out all LPE responses, see SI.

Point-group symmetries also play a critical role in constraining the LPE. For instance, the in-plane mirror symmetry $M_y$ forces the off-diagonal components of the nonlinear LPE susceptibility tensor to vanish: $\chi^{xy}(\omega) = \chi^{yx}(\omega) = 0$. This disables the helicity-dependent LPE: achiral vdW stacks (i.e., ones with a mirror plane) do not possess a helicity-dependent LPE. As a result,
BLG/hBN LPE susceptibility tensor χ(ω) = χ0(ω)·1 in the (a) insulating state (µ = 10 meV, in the gap) and (b) metallic state (µ = 20 meV), numerically evaluated using the low-energy Hamiltonian in Eq. (14). Both χI (orange) and χFS (green) contribute to the total response (purple) in the insulating state. In the metallic state, an additional metallic χSC (yellow) emerges that dominates at low frequencies. The right insets in both panels display the low-energy bandstructure of BLG/hBN; µ indicates the Fermi level. The left inset in panel (b) shows a zoom-in of the gray region. Parameters used: τ = 1 ps and ∆ = 30 meV.

Comparing with Table I, in these systems we find that the LPE proceeds from χI, χFS and χSC only; χS vanishes. In contrast, in chiral stacks that possess high crystalline symmetries, the opposite can be true. The combination of Cnz (n ≥ 3) and C2x point-group rotational symmetries can render the non-helical LPEs vanishing (see Ref. [13] for an explicit example in twisted bilayer graphene, as well as the full symmetry analysis in the SI). Of course, in chiral vdW stacks where at least one of these point-group rotational symmetries is broken, both helicity-dependent and non-helical LPEs are allowed.

Non-helical LPE response in BLG/hBN. To exemplify the non-helical LPE response from χI, χFS and χSC in TRS-preserving systems, we focus on an achiral vdW system: bilayer graphene aligned with hexagonal boron nitride (BLG/hBN). Aligned BLG/hBN breaks inversion symmetry and possesses C3z and My symmetries while breaking C2x (see Fig. 1). As a result, only non-helical LPE responses are allowed; χS vanishes. Indeed, the presence of both C3z and My guarantees χ(ω) = χ0(ω)·1, allowing the LPE to manifest even for unpolarized light.
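The claim that C3z together with My pins χ(ω) to a multiple of the identity can be checked with a few lines of linear algebra: impose invariance of the bilinear in Eq. (2) under each point-group operation, $R^{\mathsf T}\chi R = \chi$, and compute the space of allowed tensors. The sketch below (Python/NumPy; the helper name is ours, not from the paper) does this numerically.

```python
import numpy as np

def invariance_constraint(R, sign=+1):
    """Matrix of the linear map chi -> chi - sign * R^T chi R on vec(chi).

    sign=+1 when <Pz> is even under the operation (C3z, My); sign=-1 when it
    is odd (e.g., C2x, which flips the two layers).
    """
    cols = []
    for idx in range(4):
        chi = np.zeros(4); chi[idx] = 1.0
        chi = chi.reshape(2, 2)
        cols.append((chi - sign * R.T @ chi @ R).reshape(4))
    return np.array(cols).T

theta = 2 * np.pi / 3
C3z = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])  # in-plane part of C3z
My = np.diag([1.0, -1.0])                           # mirror y -> -y

# Stack both constraints and read off the allowed chi tensors as the nullspace
C = np.vstack([invariance_constraint(C3z), invariance_constraint(My)])
_, s, vt = np.linalg.svd(C)
allowed = vt[s.size - int(np.sum(s < 1e-12)):]      # nullspace rows
```

The allowed space comes out one-dimensional and spanned by the identity, i.e. χ(ω) = χ0(ω)·1, which is why unpolarized light suffices to drive the response.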
We model the long-wavelength electronic excitations of BLG/hBN using a minimal low-energy Hamiltonian

$$ \hat H = \begin{pmatrix} \Delta/2 & v\pi^\dagger & 0 & v_3\pi \\ v\pi & -\Delta/2 & \gamma_1 & 0 \\ 0 & \gamma_1 & 0 & v\pi^\dagger \\ v_3\pi^\dagger & 0 & v\pi & 0 \end{pmatrix}, \tag{14} $$

where v = 0.639 eV·nm is the Dirac velocity of graphene, v3 = 0.081 eV·nm characterizes trigonal warping, γ1 = 0.4 eV is the interlayer hopping, and π = ξkx + iky, where ξ = ±1 is the valley index. Using Eq. (1), the polarization operator reads p̂z = diag(1, 1, −1, −1). The responses of the two valleys are added. ∆ is the AB sublattice asymmetry induced by aligning one side with hBN, thereby breaking inversion symmetry and opening a gap in the spectrum (see inset in Fig. 2). In what follows we concentrate on low frequencies, up to the terahertz range, where large LPEs manifest. This is below the energy range (150-200 meV) where superlattice effects from the hBN alignment set in [35].

The LPE in BLG/hBN was numerically evaluated using Eqs. (4), (7), (8), and (9) at low temperature and summed over both valleys for the electronic states in Eq. (14); the LPE susceptibilities are plotted in Fig. 2 (see SI for a full discussion of the numerical details). We find that interband LPEs peak for frequencies close to the gap size, see Fig. 2a, where χI and χFS are plotted for a chemical potential in the gap. This indicates that both χI (orange) and χFS (green) are dominated by interband processes close to the band edge. Interestingly, when the chemical potential is moved into the conduction band (Fig. 2b), a new metallic peak in the nonlinear LPE response emerges at low frequencies that corresponds to χSC (yellow); the interband LPE responses persist but now appear at higher frequencies due to Pauli blocking (see right inset).
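A minimal numerical sketch of this model (Python/NumPy; parameter values from Eq. (14), k in nm⁻¹ measured from the valley center) builds Ĥ at a given momentum, diagonalizes it, and evaluates the band-resolved layer polarization ⟨un|p̂z|un⟩ whose differences give the δPmn entering Eq. (4):

```python
import numpy as np

# Parameters of Eq. (14): energies in eV, velocities in eV*nm
v, v3, gamma1, Delta = 0.639, 0.081, 0.4, 0.030

def hamiltonian(kx, ky, xi=+1):
    """BLG/hBN low-energy Hamiltonian; xi = +/-1 labels the valley."""
    p = xi * kx + 1j * ky
    pd = np.conjugate(p)
    return np.array([[Delta / 2, v * pd,     0.0,     v3 * p],
                     [v * p,    -Delta / 2,  gamma1,  0.0   ],
                     [0.0,       gamma1,     0.0,     v * pd],
                     [v3 * pd,   0.0,        v * p,   0.0   ]], dtype=complex)

pz = np.diag([1.0, 1.0, -1.0, -1.0])  # layer polarization operator of Eq. (1)

H = hamiltonian(0.05, 0.0)                           # k well inside the model's validity range
energies, U = np.linalg.eigh(H)                      # band energies, ascending
layer_pol = np.real(np.diag(U.conj().T @ pz @ U))    # <u_n| pz |u_n> per band
```

The differences `layer_pol[m] - layer_pol[n]` are the δPmn that weight the interband susceptibilities in Eqs. (3)-(4).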
The metallic peak is particularly striking since it displays large responses (left inset) even for frequencies below any interband optical transition, as well as a susceptibility of opposite sign compared with the interband contributions.

The LPE we unveil demonstrates how stacking can introduce new classes of responses not found in a single layer. Indeed, we anticipate that χI and χSC can produce LPEs several orders of magnitude larger than those previously known, e.g., in Ref. [13]. For instance, close to the interband peak in BLG/hBN heterostructures, we find that a large interlayer surface charge density difference of order 1 nC cm⁻² (corresponding to an interlayer voltage of order 2 mV) can be sustained even for a modest light intensity of 1000 W cm⁻². At very low frequencies, the LPE is expected to be even more pronounced, yielding up to a 5 mV interlayer voltage at the same light intensity (see Fig. 2b, left inset). Such interlayer voltages can be readily detected using capacitive probes [36, 37] or scanning single-electron transistors [38], and are not confined to BLG/hBN (on which we have focussed for a concrete illustration). Indeed, we expect that LPEs are generic and will manifest in the wide zoo of noncentrosymmetric layered heterostructures available, e.g., layered transition metal dichalcogenides. In addition to providing novel means of photodetection (especially in the THz regime), given the large LPE susceptibilities, the photoinduced interlayer polarizations may even enable light-driven switching of the electric polarization in a range of vdW layered ferroelectrics that have recently become available [39-41].

Acknowledgements.
This work was supported by the Ministry of Education Singapore under its MOE AcRF Tier 3 Grant No. MOE 2018-T3-1-002 and a Nanyang Technological University start-up grant (NTU-SUG).

∗ Electronic address: justinsong@ntu.edu.sg

[1] L. Balents, C. R. Dean, D. K. Efetov, and A. F. Young, Nature Physics 16, 725 (2020).
[2] J. C. Song and N. M. Gabor, Nature Nanotechnology 13, 986 (2018).
[3] Y. Zhang, T.-T. Tang, C. Girit, Z. Hao, M. C. Martin, A. Zettl, M. F. Crommie, Y. R. Shen, and F. Wang, Nature 459, 820 (2009).
[4] Q. Tong, H. Yu, Q. Zhu, Y. Wang, X. Xu, and W. Yao, Nature Physics 13, 356 (2017).
[5] W. Yao, D. Xiao, and Q. Niu, Physical Review B 77, 235406 (2008).
[6] J. C. Song and M. A. Kats, Nano Letters 16, 7346 (2016).
[7] J. Yin, C. Tan, D. Barcons-Ruiz, I. Torre, K. Watanabe, T. Taniguchi, J. C. Song, J. Hone, and F. H. Koppens, Science 375, 1398 (2022).
[8] C. Ma, S. Yuan, P. Cheung, K. Watanabe, T. Taniguchi, F. Zhang, and F. Xia, Nature 604, 266 (2022).
[9] G. Chen, L. Jiang, S. Wu, B. Lyu, H. Li, B. L. Chittari, K. Watanabe, T. Taniguchi, Z. Shi, J. Jung, et al., Nature Physics 15, 237 (2019).
[10] H. Zhou, T. Xie, A. Ghazaryan, T. Holder, J. R. Ehrets, E. M. Spanton, T. Taniguchi, K. Watanabe, E. Berg, M. Serbyn, et al., Nature 598, 429 (2021).
[11] H. Zhou, T. Xie, T. Taniguchi, K. Watanabe, and A. F. Young, Nature 598, 434 (2021).
[12] S. C. de la Barrera, S. Aronson, Z. Zheng, K. Watanabe, T. Taniguchi, Q. Ma, P. Jarillo-Herrero, and R. Ashoori, Nature Physics, 1-5 (2022).
[13] Y. Gao, Y. Zhang, and D. Xiao, Phys. Rev. Lett. 124, 077401 (2020).
[14] C. Aversa and J. E. Sipe, Phys. Rev. B 52, 14636 (1995).
[15] J. E. Sipe and A. I. Shkrebtii, Phys. Rev. B 61, 5337 (2000).
[16] E. Blount, in Solid State Physics, edited by F. Seitz and D. Turnbull (Academic Press, New York, 1962), vol. 13, pp. 305-373.
[17] J. Ahn, G.-Y. Guo, and N. Nagaosa, Phys. Rev. X 10, 041041 (2020).
[18] F. de Juan, A. G. Grushin, T.
Morimoto, and J. E. Moore, Nature Communications 8, 15995 (2017).
[19] L. Gao, Z. Addison, E. J. Mele, and A. M. Rappe, Phys. Rev. Res. 3, L042032 (2021).
[20] H. Watanabe and Y. Yanase, Phys. Rev. X 11, 011001 (2021).
[21] O. Matsyshyn and I. Sodemann, Phys. Rev. Lett. 123, 246602 (2019).
[22] I. Sodemann and L. Fu, Phys. Rev. Lett. 115, 216806 (2015).
[23] R. Resta and D. Vanderbilt, Theory of Polarization: A Modern Approach (Springer, Berlin, Heidelberg, 2007), pp. 31-68.
[24] L.-k. Shi, O. Matsyshyn, J. C. W. Song, and I. Sodemann Villadiego, arXiv:2207.03496 (2022).
[25] O. Matsyshyn, J. C. W. Song, I. Sodemann Villadiego, and L.-k. Shi, arXiv:2301.00811 (2023).
[26] R. von Baltz and W. Kraut, Phys. Rev. B 23, 5590 (1981).
[27] B. I. Sturman and V. M. Fridkin, The Photovoltaic and Photorefractive Effects in Noncentrosymmetric Materials, Ferroelectricity and Related Phenomena, vol. 8 (Gordon and Breach Science Publishers, Philadelphia, 1992).
[28] J. A. Brehm, S. M. Young, F. Zheng, and A. M. Rappe, The Journal of Chemical Physics 141, 204704 (2014).
[29] T. Morimoto and N. Nagaosa, Science Advances 2 (2016).
[30] N. Nagaosa and T. Morimoto, Advanced Materials 29, 1603345 (2017).
[31] D. E. Parker, T. Morimoto, J. Orenstein, and J. E. Moore, Phys. Rev. B 99, 045121 (2019).
[32] O. Matsyshyn, F. Piazza, R. Moessner, and I. Sodemann, Phys. Rev. Lett. 127, 126604 (2021).
[33] W. Kraut and R. von Baltz, Phys. Rev. B 19, 1548 (1979).
[34] O. Matsyshyn, U. Dey, I. Sodemann, and Y. Sun, J. Phys. D 54, 404001 (2021).
[35] M. Yankowitz, J. Xue, D. Cormode, J. D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, P. Jarillo-Herrero, P. Jacquod, and B. J. LeRoy, Nature Physics 8, 382 (2012).
[36] A. F. Young and L. S. Levitov, Phys. Rev. B 84, 085441 (2011).
[37] A. F. Young, C. R. Dean, I. Meric, S. Sorgenfrei, H. Ren, K. Watanabe, T.
Taniguchi, J. Hone, K. L. Shepard, and P. Kim, Phys. Rev. B 85, 235458 (2012).
[38] J. Martin, B. E. Feldman, R. T. Weitz, M. T. Allen, and A. Yacoby, Phys. Rev. Lett. 105, 256806 (2010).
[39] Z. Zheng, Q. Ma, Z. Bi, S. de la Barrera, M.-H. Liu, N. Mao, Y. Zhang, N. Kiper, K. Watanabe, T. Taniguchi, et al., Nature 588, 71 (2020).
[40] X. Wang, K. Yasuda, Y. Zhang, S. Liu, K. Watanabe, T. Taniguchi, J. Hone, L. Fu, and P. Jarillo-Herrero, Nature Nanotechnology 17, 367 (2022).
[41] K. Yasuda, X. Wang, K. Watanabe, T. Taniguchi, and P. Jarillo-Herrero, Science 372, 1458 (2021).
[42] L. Wang, I. Meric, P. Huang, Q. Gao, Y. Gao, H. Tran, T. Taniguchi, K. Watanabe, L. Campos, D. Muller, et al., Science 342, 614 (2013).

Supplementary Information for "Layer photovoltaic effect in van der Waals heterostructures"

Density matrix and perturbation theory

In this section we discuss perturbative corrections to the density matrix in the presence of an irradiating electromagnetic field; these are used to directly compute the interlayer polarization responses found in the main text. Starting from the Liouville equation $i\hbar\, d\hat\rho(t)/dt = [\hat H(\mathbf k,t), \hat\rho(t)]$, with the electric field in the length gauge, $\hat H(\mathbf k,t) = H_0(\mathbf k) + e\hat{\mathbf r}\cdot \mathbf E(t)e^{\eta t}$, we compute perturbative corrections to the density matrix (DM): $\hat\rho = \hat\rho^{(0)} + \hat\rho^{(1)} + \hat\rho^{(2)} + O(E^3)$, where the index (0, 1, 2) denotes the order of the correction. The second-order correction is given by

$$ \rho^{(2)}_{nm}(t) = \frac{e^2}{\hbar^2}\iint \frac{d\omega_2\, d\omega_1}{(2\pi)^2}\, E^\alpha(\omega_2)E^\beta(\omega_1)\, e^{-i(\omega_1+\omega_2)t+2\eta t} \times \Bigg\{ \delta_{nm}\frac{i\partial_\beta\, i\partial_\alpha f_n}{(\omega_1+\omega_2+2i\eta)(\omega_2+i\eta)} + \frac{1}{\omega_1+\omega_2-\omega_{nm}+2i\eta}\, i\partial_\beta \frac{A^\alpha_{nm}f_{mn}}{\omega_2-\omega_{nm}+i\eta} + \frac{A^\beta_{nm}\, i\partial_\alpha f_{mn}}{(\omega_2+i\eta)(\omega_1+\omega_2-\omega_{nm}+2i\eta)} + \frac{1}{\omega_1+\omega_2-\omega_{nm}+2i\eta}\sum_c \bigg[\frac{A^\beta_{nc}A^\alpha_{cm}f_{mc}}{\omega_2-\omega_{cm}+i\eta} - \frac{A^\alpha_{nc}A^\beta_{cm}f_{cn}}{\omega_2-\omega_{nc}+i\eta}\bigg] \Bigg\}. \tag{S1} $$

Using Eq. (S1) and Eq. (1) in the main text, the total polarization response is given by

$$ \langle P^z\rangle/d = \sum_{i=0}^{\infty}\int \frac{d\mathbf k}{(2\pi)^2}\sum_{nm} p^z_{nm}\,\rho^{(i)}_{mn}. $$
(S2)

In the main text we focussed on the i = 2 contribution, since it gives the leading DC (i.e., static) photoinduced interlayer polarization. For monochromatic light, the DC contribution is obtained by averaging the polarization over one period, $\langle f_{\rm st}\rangle = \int_0^T f(t)\,dt/T$. In so doing we concentrated on the ω1 + ω2 = 0 contributions in Eq. (S1). The injection [Eq. (4)], shift [Eq. (5)], and Fermi-sea [Eqs. (7) and (8)] contributions to the LPE in the main text are obtained by plugging the terms in the third and fifth lines of Eq. (S1) into Eq. (S2). The contributions can be naturally delineated into resonant (containing delta functions, e.g., χI and χS) and off-resonant (involving principal parts, e.g., χFS) in the limit of vanishingly small η → 0. Similarly, the SC and Berry contributions to the LPE are obtained by substituting the terms in the second and fourth lines of Eq. (S1) into Eq. (S2), respectively.

Interlayer current

In this section we provide a fuller account of the intrinsic interlayer current. First, for the convenience of the reader, we recall that the interlayer current operator can be written as

$$ \hat j^z = \frac{d\hat P^z}{dt} = \frac{1}{i\hbar}[\hat P^z, \hat H]. \tag{S3} $$

As a result, the time-varying interlayer current can be directly evaluated as

$$ \langle j^z(t)\rangle = \mathrm{Tr}[\hat j^z\hat\rho(t)] = \frac{1}{i\hbar}\mathrm{Tr}\{[\hat P^z,\hat H]\hat\rho(t)\} = \frac{1}{i\hbar}\mathrm{Tr}\{\hat P^z[\hat H,\hat\rho(t)]\}, \tag{S4} $$

where we used the cyclic permutation property of the trace, Tr{[A,B]C} = Tr{A[B,C]}. Importantly, the Liouville equation for the full system requires $i\hbar\, d\hat\rho(t)/dt = [\hat H, \hat\rho(t)]$. As a result, we can directly identify the interlayer current as

$$ \langle j^z(t)\rangle = \mathrm{Tr}\{\hat P^z\dot\rho(t)\}, \tag{S5} $$

thus reproducing Eq. (11) of the main text. The rectified component of the interlayer current is obtained as the period average of ⟨jz(t)⟩:

$$ j^z_{\rm rectified} = \int_0^T \frac{dt}{T}\,\lim_{\eta\to 0}\mathrm{Tr}\{\hat P^z\,\dot\rho^{(2)}(t)\} + O(E^4). $$
(S6)

Crucially, it is the time derivative of the density matrix that controls the rectified interlayer current response. Directly taking the time derivative of the second-order correction to the density matrix in Eq. (S1), we find that only two terms in Eq. (S1) generate a finite contribution to the current above, namely the terms in $\rho^{(2)}_{nm}(t)$ that correspond to the injection and semiclassical LPEs. Notice, however, that the latter contribution to the intrinsic interlayer current in Eq. (S6) [after contraction with the electric fields and symmetrizing α ↔ β, ω ↔ −ω] displays a delta-function peak at zero frequency, δ(ω). As a result, at finite non-zero frequencies only the injection-type response remains in the limit η → 0.

Time-reversal, inversion and spatial symmetries

This section briefly discusses the constraints dictated by time-reversal, inversion and spatial symmetries for the LPE susceptibility tensor.

We first focus on time-reversal symmetry (T). TRS produces the following relations for the matrix elements of the Berry connection and the polarization operator, as well as for the band energy difference:

$$ A_{nm}(\mathbf k) = A_{mn}(-\mathbf k), \quad \omega_{nm}(\mathbf k) = \omega_{nm}(-\mathbf k), \quad p_{nm}(\mathbf k) = p_{mn}(-\mathbf k). $$

       My   C3z   C2x   C2x + C3z
LPL:   ✓    ✓     ✓     ✗
CPL:   ✗    ✓     ✓     ✓

TABLE SI: This table summarises whether LPE responses are allowed (✓) or forbidden (✗) in systems exposed to incident light with either circular (CPL) or linear (LPL) polarisation, in the presence of the spatial symmetries C2x, C3z and My.

Applying these relationships, we find that the Berry LPE response [in Eq. (10)] vanishes in the presence of TRS (since ∂kf is odd while Anm is even under k → −k). In contrast, in the presence of TRS the SC LPE response persists.
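These TRS relations can be made concrete with a toy two-band check. For a spinless time-reversal-symmetric Bloch Hamiltonian, H(−k) = H*(k), the band energies satisfy ωnm(k) = ωnm(−k), and the gauge-invariant modulus |Anm(k)| = |Amn(−k)| can be tested via the (convention-independent) combination |⟨un|∂kH|um⟩|/ωmn for n ≠ m. The sketch below uses our own illustrative d-vector model, not one from the paper:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky):
    # dx, dz even and dy odd in k  =>  H(-k) = H(k)^*  (spinless TRS, T = K)
    dx = 0.5 + kx**2 - ky**2
    dy = kx + 2 * ky
    dz = 0.2 + ky**2
    return dx * sx + dy * sy + dz * sz

def spectrum_and_absA(kx, ky, d=1e-5):
    """Band energies and |<u_0| dH/dkx |u_1>| / (e_1 - e_0), a gauge-invariant
    proxy for the interband Berry connection modulus."""
    e, U = np.linalg.eigh(H(kx, ky))
    dHx = (H(kx + d, ky) - H(kx - d, ky)) / (2 * d)
    absA = abs(U[:, 0].conj() @ dHx @ U[:, 1]) / (e[1] - e[0])
    return e, absA

e1, a1 = spectrum_and_absA(0.3, -0.7)
e2, a2 = spectrum_and_absA(-0.3, 0.7)  # opposite momentum
```

Comparing the outputs at ±k verifies the energy and Berry-connection relations quoted above for this model.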
The interband LPE responses, however, have susceptibilities that obey

$$ \chi^{\alpha\beta}_{\rm FS}(\omega) = \chi^{\alpha\beta}_{\rm FS}(-\omega) = \chi^{\beta\alpha}_{\rm FS}(\omega), \tag{S7} $$
$$ \chi^{\alpha\beta}_{\rm S}(\omega) = -\chi^{\alpha\beta}_{\rm S}(-\omega), \tag{S8} $$
$$ \chi^{\alpha\beta}_{\rm I}(\omega) = \chi^{\alpha\beta}_{\rm I}(-\omega) = \chi^{\beta\alpha}_{\rm I}(\omega), \tag{S9} $$

in the presence of TRS.

Note that, in a similar way, the presence of inversion symmetry (I) demands $A_{nm}(\mathbf k) = -A_{nm}(-\mathbf k)$, $\omega_{nm}(\mathbf k) = \omega_{nm}(-\mathbf k)$, $p_{nm}(\mathbf k) = -p_{nm}(-\mathbf k)$. As a result, inversion symmetry forces all second-order LPE responses to vanish; broken inversion symmetry is required to realize LPE responses, as expected.

Point-group symmetries play an additional critical role in constraining LPE responses. For instance, in-plane mirror symmetry My forces the off-diagonal components of the susceptibility tensor to vanish, χxy(ω) = χyx(ω) = 0. As a result, in-plane mirror symmetry disables helicity-dependent second-order interlayer polarization responses (see column 2 of Table SI). Indeed, as discussed in the main text, BLG/hBN possesses in-plane mirror symmetry, zeroing the helicity-dependent χS response.

We now discuss the impact of point-group rotational symmetry. In the presence of Cnz (n ≥ 3) symmetry, the LPE susceptibility tensor obeys χxx(ω) = χyy(ω) and χxy(ω) = −χyx(ω). Similarly, in the presence of C2x symmetry, the out-of-plane polarization must switch sign, ⟨Pz⟩ → −⟨Pz⟩, under the operation of C2x: this means that the susceptibility components that preserve their sign under C2x are forced to vanish, χxx(ω) = χyy(ω) = 0. The off-diagonal components of the LPE susceptibility tensor, however, are allowed. Note that the off-diagonal components in principle encode both helicity-dependent responses [i.e., the antisymmetric part, χxy(ω) − χyx(ω)] as well as non-helical responses to linearly polarized light [i.e., the symmetric part, χxy(ω) + χyx(ω)]. Crucially, the presence of just one of the above symmetries alone is compatible with both helical and non-helical responses (see columns 3 and 4 of Table SI).
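The C2x statements can be verified by the same kind of constraint counting used for the χ tensor in the main text: because ⟨Pz⟩ is odd under C2x while the in-plane field transforms with R = diag(1, −1), the allowed tensors solve RᵀχR = −χ; combining this with C3z invariance leaves only the antisymmetric (helical) part. A short numerical sketch (Python/NumPy; the helper names are ours):

```python
import numpy as np

def tensor_map(R, sign):
    """Matrix of the linear map chi -> chi - sign * R^T chi R on vec(chi)."""
    cols = []
    for idx in range(4):
        chi = np.zeros(4); chi[idx] = 1.0
        chi = chi.reshape(2, 2)
        cols.append((chi - sign * R.T @ chi @ R).reshape(4))
    return np.array(cols).T

def allowed_space(constraints):
    """Nullspace basis (rows) of a stack of linear constraints on vec(chi)."""
    C = np.vstack(constraints)
    _, s, vt = np.linalg.svd(C)
    return vt[len(s) - int(np.sum(s < 1e-12)):]

R2x = np.diag([1.0, -1.0])  # in-plane part of C2x; sign=-1 since <Pz> flips
theta = 2 * np.pi / 3
C3z = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

only_c2x = allowed_space([tensor_map(R2x, -1)])        # chi_xy, chi_yx survive
c2x_and_c3z = allowed_space([tensor_map(R2x, -1),
                             tensor_map(C3z, +1)])     # only the antisymmetric part
```

With C2x alone the allowed space is two-dimensional (both off-diagonal components), while C2x together with C3z leaves a single antisymmetric tensor, i.e. only the helicity-dependent response.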
However, when both C3z and C2x symmetries are present simultaneously (e.g., in pristine twisted bilayer graphene), they ensure that only helicity-dependent LPE responses manifest (in the case of TRS, only χS is non-zero); non-helical responses in systems with both C3z and C2x vanish.

Details of numerics

The evaluation of the LPE responses for BLG/hBN shown in the main text was carried out numerically, using the BLG/hBN Hamiltonian in Eq. (14) of the main text together with the expressions for the various LPE responses found there. In so doing, we used an effective relaxation time τ = 1 ps as an illustration; this value is characteristic of ultraclean graphene-based heterostructures [42]. Further, in numerically evaluating integrals with delta functions and principal values, we compared the LPE response expressions directly with the density matrix in Eq. (S1), using their corresponding finite-but-small-η representations (taking η → 1/τ). Moreover, we found that our numerically evaluated LPE responses converged rapidly at low temperature.

While χI and χS depend on the interband transition contours, and χSC depends on the Fermi surface, the Fermi-sea responses χFS in Eqs. (7) and (8) of the main text in principle depend on a sum over the entire band. Here we evaluated χFS systematically, starting from the Dirac point and moving outwards in momentum space; importantly, we achieved convergence rapidly for v|kmax| ∼ 100 meV, within the range of validity of our low-energy Hamiltonian. The momentum-space integration grid was chosen as 300 × 300 k-points for the interband terms. For χSC we adopted a finer mesh, with up to 900 × 900 k-points, to capture the sharpness of the Fermi surface at low temperatures.

Interlayer polarization and interlayer voltage

While the photoinduced interlayer polarizations can be directly obtained in Eq.
(2) from the LPE susceptibilities and the oscillating electric fields of the incident EM irradiation, these interlayer polarizations can also manifest as an interlayer voltage, ∆U. Modeling the bilayer system as a simple parallel-plate capacitor with an interlayer distance 2d = 3.46 Å, we find

$$ \Delta U = \frac{2d}{\varepsilon_0}\frac{\delta\sigma}{2} = \frac{\langle\delta P^z_{\rm static}\rangle}{2\varepsilon_0}, \tag{S10} $$

where ε0 is the vacuum permittivity between the graphene planes, and δσ (with units of C·cm⁻²) is the photoinduced difference between the surface charge densities on the top and bottom layers,

$$ \delta\sigma = \frac{\langle\delta P^z_{\rm static}\rangle}{2d} = \sum_{\alpha\beta}\mathrm{Re}[E^\alpha E^{\beta*}\chi^{\alpha\beta}(\Omega)], \tag{S11} $$

obtained directly from Eq. (2) in the main text.

Layer photovoltaic effect in van der Waals heterostructures

Oles Matsyshyn, Ying Xiong, Arpit Arora, Justin C.
W. Song∗
Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371

We argue that the layer electric polarization of noncentrosymmetric layered heterostructures can be generically controlled by light, yielding a layer photovoltaic effect (LPE).
The LPE possesses a rich phenomenology and can arise from myriad distinct mechanisms displaying strong sensitivity to symmetry (e.g., point group and time reversal) as well as to the presence or absence of a Fermi surface. We systematically classify these and unveil how the LPE manifests for a range of light polarizations, and even for unpolarized light. These unusual layer photoresponses can be realized in a range of layered heterostructures, such as bilayer graphene aligned on hexagonal boron nitride, and manifest sizeable layer polarization susceptibilities in the terahertz frequency range that can be used as novel means of bulk photodetection.

Mechanical stacks of atomically thin van der Waals (vdW) materials make it possible to build quantum phases from the bottom up, with properties that go beyond those of their individual constituent components [1, 2]. A particularly striking example is the emergence of a layer degree of freedom in stacks.
Manipulating the relative degree to which each of the layers is charged, as characterized by the static interlayer polarization, affords means to dramatically engineer bandstructure [3, 4], tune quantum geometric properties [5-8], and realize correlated phases of matter [9-12]. Since the interlayer polarization points out of plane, it is highly sensitive to vertical displacement fields. As a result, it has traditionally been controlled by toggling voltages sustained across a dual top- and bottom-gate sandwich architecture [3]. Here we argue that the interlayer polarization in noncentrosymmetric layered heterostructures can be generically controlled by light, manifesting a layer photovoltaic effect (LPE). Such LPE responses appear at second order in the incident light electromagnetic (EM) field and, as we show below, come in myriad distinct types; by performing a systematic classification, we delineate LPEs with distinct symmetry constraints, light polarization dependence, and physical origins.
Importantly, we find that LPEs can arise from both resonant interband absorption and off-resonant virtual processes, in either metallic or insulating states, providing versatile means to control the interlayer polarization across different phases of matter. We note that an example of an LPE was recently predicted in chiral vdW bilayers, where the interband absorption of circularly polarized light in such handed stacks induces a helicity-dependent photoinduced interlayer polarization [13]. Our work systematically shows that there exists a wide range of LPE interlayer responses beyond those known previously. For instance, at high frequencies corresponding to interband transitions, we find that an injection-like process enables non-helical light to induce a (second-order) nonlinear LPE even in an achiral, noncentrosymmetric vdW layered heterostructure, see Fig. 1. Surprisingly, the injection-like process also produces an interlayer current even though the electrons do not possess

FIG.
1: Layer photovoltaic effect and interlayer polarization. Photoinduced nonlinear interlayer polarization (here denoted pz) in noncentrosymmetric van der Waals stacks; we term this the layer photovoltaic effect (LPE). Shown is an example of a noncentrosymmetric, achiral vdW structure: bilayer graphene aligned with hexagonal boron nitride (BLG/hBN), with interlayer distance 2d. Such achiral structures possess an LPE induced by non-helical light.

an out-of-plane velocity. Instead, this intrinsic interlayer current arises from the pumping of the interlayer polarization. Additionally, even at low frequency, without interband transitions, we find new types of large LPE responses that can be induced in the metallic regime.
As we will see, these latter metallic contributions arise from the momentum-space asymmetry of the layer polarization of Bloch states on the Fermi surface. We anticipate that the LPEs we unveil can be used in novel bulk photodetection schemes that do not require p-n junctions. Since many noncentrosymmetric vdW stacks are achiral, possessing mirror symmetries that render the helicity-dependent LPE vanishing, non-helical LPEs are crucial in activating interlayer polarization responses. Indeed, as we discuss below, the injection and metallic LPEs we unveil in our work can achieve giant susceptibility values in bilayer graphene (BLG) aligned with hexagonal boron nitride (hBN) heterostructures, orders of magnitude larger than those reported in chiral stacks [13], and manifest even for unpolarized light. (arXiv:2301.02255v1 [cond-mat.mes-hall], 5 Jan 2023)

Interlayer polarization response.
We begin by directly examining the LPE response, which is connected to the layer degree of freedom, l. The interlayer polarization operator is

$$ \hat P^z = ed\sum_{\alpha l}\hat l\,|\alpha l\rangle\langle\alpha l| = \hat p^z d, \tag{1} $$

where $\hat l$ is the layer index operator, $\hat l|\alpha l\rangle = l|\alpha l\rangle$, and $|\alpha l\rangle$ are orbitals localized on layer l. For clarity, here we concentrate on a bilayer system with an interlayer distance 2d (see Fig. 1). Our theory, however, is general and can be readily applied to multilayered systems. When light is normally incident on the vdW stack (see Fig. 1), an out-of-plane static interlayer polarization can be induced.
This produces a second-order nonlinear photoinduced static interlayer polarization, $\langle \delta P^z_{\rm st}\rangle$, characterized by an LPE susceptibility tensor $\chi(\omega)$ as
$$\langle \delta P^z_{\rm st}\rangle = 2d\sum_{\alpha\beta} \mathrm{Re}[E^\alpha E^{\beta*}\chi^{\alpha\beta}(\Omega)], \qquad (2)$$
where $\alpha,\beta$ are spatial indices ($x$ or $y$). We will show there are five contributions to $\chi^{\alpha\beta}(\omega)$ with distinct physical origins, symmetry properties, and phenomenology.

To proceed, it is useful to delineate between interband and intraband responses; for concreteness we will confine ourselves to band non-degenerate systems. Three contributions comprise the interband response: injection (I), shift (S), and Fermi sea (FS),
$$\chi^{\alpha\beta}_{\rm inter}(\omega) = \chi^{\alpha\beta}_{\rm I}(\omega) + \chi^{\alpha\beta}_{\rm S}(\omega) + \chi^{\alpha\beta}_{\rm FS}(\omega), \qquad (3)$$
where $\chi_{\rm I}$ and $\chi_{\rm S}$ describe LPE arising from resonant real interband excitations, whereas $\chi_{\rm FS}$ is off-resonant. The injection susceptibility is $\chi^{\alpha\beta}_{\rm I}(\omega) = \tau\sigma^{\alpha\beta}_{\rm inter}(\omega)/4$ with
$$\sigma^{\alpha\beta}_{\rm inter}(\omega) = \frac{\pi e^2}{\hbar^2}\sum_{n,m,\mathbf{k}} \delta(\omega + \omega_{nm})\, A^\alpha_{nm} A^\beta_{mn}\, f_{nm}\, \delta P_{mn}, \qquad (4)$$
where $f_{nm} = f[\epsilon_n(\mathbf{k})] - f[\epsilon_m(\mathbf{k})]$ is the difference between Fermi functions in different bands, $\hbar\omega_{nm} = \epsilon_n(\mathbf{k}) - \epsilon_m(\mathbf{k})$, and $\delta P_{mn} = p^z_{mm} - p^z_{nn}$ is the difference in layer polarization between the final and initial states. $p^z_{nm} = \langle u_{n\mathbf{k}}|\hat{p}^z|u_{m\mathbf{k}}\rangle$ is a matrix element of the polarization operator and $A^\alpha_{nm} = i\langle u_{n\mathbf{k}}|\partial_{k_\alpha}|u_{m\mathbf{k}}\rangle$ is the interband Berry connection [16]. Here $\tau$ is a phenomenological relaxation time [17] that regularizes the $\chi_{\rm I}$ response.

TABLE I: LPE mechanisms in TRS-preserving systems.

                  SC   FS   shift   injection   reported in
    non-helical   ✓    ✓    ✗       ✓           this work
    helical       ✗    ✗    ✓       ✗           Ref. [13]

Non-helical mechanisms are induced by linearly polarized light; helical responses are induced by circularly polarized light. ✓ denotes allowed, ✗ indicates forbidden. SC and FS stand for semiclassical and Fermi sea, respectively. Note that $\chi_{\rm B}$ (see text) is forbidden when TRS is preserved, but becomes activated when TRS is broken.

$\chi_{\rm I}(\omega)$ represents the first new result of our work and arises from the contrasting interlayer polarization when an electron transitions from state $n,\mathbf{k} \to m,\mathbf{k}$: its polarization changes from $p_{nn} \to p_{mm}$.
As we will argue below, this process also yields an anomalous photoinduced interlayer current, controlled by an interlayer conductivity $\sigma^{\alpha\beta}_{\rm inter}(\omega)$. This anomalous interlayer current acts as a source that pumps the interlayer electric polarization. As a result, $\chi_{\rm I}(\omega)$ grows with $\tau$, yielding a large LPE. This picture is similar to how bulk injection photocurrents are often understood as arising from a photoinduced acceleration [17, 18]. The injection LPE contrasts with the shift LPE, $\chi_{\rm S}(\omega)$, recently discussed in Ref. [13]:
$$\chi^{\alpha\beta}_{\rm S}(\omega) = \frac{\pi e^2}{\hbar^2}\sum_{n,m,\mathbf{k}} \delta(\omega + \omega_{nm})\, f_{nm}\, A^\alpha_{nm} M^\beta_{mn}, \qquad (5)$$
where
$$M^\beta_{mn} = \frac{\partial_\beta p_{mn}}{\omega_{mn}} - i\sum_c \left[\frac{A^\beta_{mc}\,\bar{p}^z_{cn}}{\omega_{cn}} - \frac{\bar{p}^z_{mc}}{\omega_{mc}}\, A^\beta_{cn}\right], \qquad (6)$$
and $\bar{p}^z_{nm} = p^z_{nm}(1 - \delta_{nm})$. $\chi_{\rm S}$ is intrinsic ($\tau$-independent) and arises from an interlayer coordinate shift that is non-vanishing in chiral media. In contrast to the other interband responses, $\chi_{\rm FS}(\omega)$ does not require real transitions.
Instead, it corresponds to a nonlinear interlayer polarization sustained even for light with frequency below the bandgap of an insulator. It is written as $\chi^{\alpha\beta}_{\rm FS}(\omega) = \chi^{\alpha\beta}_{\rm FS,1}(\omega) + \chi^{\alpha\beta}_{\rm FS,2}(\omega)$, where
$$\chi^{\alpha\beta}_{\rm FS,1}(\omega) = \frac{e^2}{2\hbar^2}\sum_{n,m,\mathbf{k}} A^\alpha_{nm} A^\beta_{mn}\, f_{nm}\, \mathcal{P}\!\left[\frac{p^z_{mm} - p^z_{nn}}{(\omega + \omega_{nm})^2}\right], \qquad (7)$$
$$\chi^{\alpha\beta}_{\rm FS,2}(\omega) = \frac{ie^2}{\hbar^2}\sum_{n,m,\mathbf{k}} f_{mn}\, A^\alpha_{nm} M^\beta_{mn}\, \mathcal{P}\!\left[\frac{1}{\omega + \omega_{nm}}\right], \qquad (8)$$
where $\mathcal{P}$ denotes the principal part. Strikingly, this off-resonant LPE survives even for insulators (unlike its photocurrent counterpart [19, 20]). As a result, we denote it the Fermi-sea LPE, since it arises from virtual processes between completely occupied and unoccupied bands. $\chi_{\rm FS}$ proceeds in much the same fashion as the conventional dielectric response in insulators, where similar virtual processes contribute to the dynamical screening. Indeed, $\chi_{\rm FS}$ can be understood as its nonlinear rectified counterpart.
The last new LPEs we unveil are intraband in nature: they depend on the presence of a Fermi surface and exhibit a low-frequency divergence characteristic of metallic responses in the clean limit. These are the semiclassical (SC) and Berry (B) LPE responses, $\chi_{\rm intra}(\omega) = \chi_{\rm SC}(\omega) + \chi_{\rm B}(\omega)$, with SC susceptibility
$$\chi^{\alpha\beta}_{\rm SC}(\omega) = \frac{e^2}{2\hbar^2}\sum_{n,\mathbf{k}} \frac{\partial_\alpha\partial_\beta f_n}{\omega^2 + \tau^{-2}}\, p^z_{nn}, \qquad (9)$$
and Berry susceptibility
$$\chi^{\alpha\beta}_{\rm B}(\omega) = \frac{e^2}{\hbar^2}\sum_{n,m,\mathbf{k}} \frac{p^z_{nm} A^\alpha_{mn}\, i\partial_\beta f_{nm}}{(\omega + i\tau^{-1})\,\omega_{nm}}, \qquad (10)$$
where the intraband responses are regularized with a relaxation time $\tau$ [21]. Note that $\chi_{\rm B}$ shares a similar density-matrix origin with its counterpart in the more familiar but distinctly different photocurrent response (the Berry curvature dipole induced nonlinear Hall effect [22]). $\chi_{\rm SC}(\omega)$ has a semiclassical origin: it arises from a DC shift (in momentum space) of the metallic Fermi surface induced by the periodic driving; this shift picks out a dipolar distribution of $p_{nn}(\mathbf{k})$ in momentum space.
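The $\tau$-regularized denominator in Eq. (9) gives $\chi_{\rm SC}$ its characteristic metallic frequency profile: a Lorentzian-like peak of width $\tau^{-1}$ centered at $\omega = 0$, which is why the semiclassical LPE dominates well below any interband resonance. A minimal sketch of just this frequency envelope (the momentum-sum prefactor is set to unity; units are arbitrary):

```python
import numpy as np

def chi_sc_envelope(omega, tau):
    """Frequency envelope 1/(omega^2 + tau^-2) of the semiclassical
    susceptibility in Eq. (9); the k-sum prefactor is omitted."""
    return 1.0 / (omega**2 + tau**-2)

omega = np.linspace(0.0, 10.0, 1001)
env = chi_sc_envelope(omega, tau=2.0)
# Peak value tau^2 at omega = 0; half-maximum reached at omega = 1/tau.
```

The envelope decays monotonically with frequency, consistent with the low-frequency metallic peak discussed for Fig. 2b below.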
$\chi_{\rm B}(\omega)$ arises from interband coherences sustained by the periodic driving; unlike the other responses we have discussed, $\chi_{\rm B}(\omega)$ is odd under time reversal (cf. the $\partial_\beta f$ term), vanishing in non-magnetic materials. In what follows, we will focus on LPEs in time-reversal symmetry (TRS) preserving systems.

Intrinsic out-of-plane interlayer current. We now proceed to argue that the large injection LPE originates from an anomalous out-of-plane interlayer current induced by oscillating in-plane electric fields. To see this, we note that the interlayer electric current is naturally described by $\hat{j}^z = d\hat{P}^z/dt = [\hat{P}^z, \hat{H}]/(i\hbar)$ [23].
Computing the expectation value of the interlayer current, $\langle j^z(t)\rangle$, we find
$$\mathrm{Tr}[\hat{j}^z\rho(t)] = \frac{1}{i\hbar}\mathrm{Tr}\{[\hat{P}^z, \hat{H}]\rho(t)\} = \mathrm{Tr}[\hat{P}^z\dot{\rho}(t)], \qquad (11)$$
where we have used the cyclic property of the trace, $\mathrm{Tr}\{[A,B]C\} = \mathrm{Tr}\{A[B,C]\}$, as well as the Liouville equation $i\hbar\, d\hat{\rho}(t)/dt = [\hat{H}(\mathbf{k},t), \hat{\rho}(t)]$. In order to isolate the rectified interlayer current, we focus on the period average $j^z_{\rm rectified} = [\int_0^T dt\, \lim_{\eta\to 0}\langle j^z(t)\rangle]/T$, where $T = 2\pi/\Omega$ is the period of the driving EM field. For a finite drive frequency $\Omega$, this directly produces an out-of-plane interlayer current
$$j^z_{\rm rectified} = 2d\,\mathrm{Re}[E^\alpha E^{\beta*}\sigma^{\alpha\beta}_{\rm inter}(\Omega)], \qquad (12)$$
that is driven by an oscillating in-plane electric field $\mathbf{E}$. Here $\sigma^{\alpha\beta}_{\rm inter}(\Omega)$ is the interlayer nonlinear conductivity found in Eq. (4). Interestingly, Eq. (4) depends only on intrinsic band geometric quantities (e.g., $A^\alpha_{nm}$, $\delta P_{mn}$).

We note that second-order nonlinear photocurrent susceptibilities have recently been the subject of intense investigation [14, 15, 21, 24-34]. These works have concentrated on photocurrents formed from bulk itinerant electrons with a well-defined velocity. In contrast, $\sigma^{\alpha\beta}_{\rm inter}(\Omega)$ describes an out-of-plane current in a vdW stack hosting electrons that do not have a $z$-direction velocity. Instead, the interlayer current can be understood as a type of electric polarization pump that injects polarization.

Symmetry properties of LPE. The mechanisms for LPE discussed above have distinct symmetry properties. To see this, we re-write Eq. (2) as
$$\langle \delta P^z_{\rm st}\rangle/d = \underbrace{(E^\alpha E^{*\beta} + E^{*\alpha}E^\beta)}_{\text{linearly polarized light}}\; \underbrace{\tfrac{1}{2}\left[\chi^{\alpha\beta}(\Omega) + \chi^{\alpha\beta}(-\Omega)\right]}_{\mathrm{Re}} \;+\; \underbrace{(iE^\alpha E^{*\beta} - iE^{*\alpha}E^\beta)}_{\text{circularly polarized light}}\; \underbrace{\tfrac{1}{2i}\left[\chi^{\alpha\beta}(\Omega) - \chi^{\alpha\beta}(-\Omega)\right]}_{\mathrm{Im}}, \qquad (13)$$
displaying how the real (imaginary) parts of the susceptibility tensor control the response to linearly polarized (circularly polarized) irradiation.
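The field-tensor decomposition in Eq. (13) is easy to verify numerically: since $Q_{\alpha\beta} = E^\alpha E^{*\beta}$ is Hermitian, the combination $E^\alpha E^{*\beta} + E^{*\alpha}E^\beta = 2\,\mathrm{Re}\,Q_{\alpha\beta}$ is real and symmetric (non-helical), while $i(E^\alpha E^{*\beta} - E^{*\alpha}E^\beta) = -2\,\mathrm{Im}\,Q_{\alpha\beta}$ is real and antisymmetric, vanishing unless the light carries helicity. A minimal sketch (the field amplitudes are illustrative):

```python
import numpy as np

def field_quadratics(E):
    """Split E^a E*^b into the non-helical (symmetric) and helical
    (antisymmetric) combinations entering Eq. (13)."""
    Q = np.outer(E, E.conj())   # Q[a, b] = E^a (E^b)*, a Hermitian matrix
    non_helical = 2.0 * Q.real  # E^a E*^b + E*^a E^b
    helical = -2.0 * Q.imag     # i(E^a E*^b - E*^a E^b)
    return non_helical, helical

# Linearly polarized light along x: no helical component.
nh_lin, h_lin = field_quadratics(np.array([1.0, 0.0]))

# Circularly polarized light: a finite antisymmetric (helical) part.
nh_circ, h_circ = field_quadratics(np.array([1.0, 1j]) / np.sqrt(2.0))
```

Only the antisymmetric (helical) part changes sign when the handedness of the circular polarization is reversed, which is what distinguishes $\chi_{\rm S}$ from the non-helical mechanisms in Table I.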
Recalling that under time-reversal symmetry we have $A_{nm}(\mathbf{k}) = A_{mn}(-\mathbf{k})$ and $p_{nm}(\mathbf{k}) = p_{mn}(-\mathbf{k})$, we obtain the non-helical (linear) vs. helical (circular) classification in Table I, namely: $\chi_{\rm I}$, $\chi_{\rm FS}$ and $\chi_{\rm SC}$ mediate responses to linearly polarized light but are helicity insensitive; $\chi_{\rm S}$, in contrast, only arises under circularly polarized irradiation. Naturally, inversion symmetry zeroes out all LPE responses; see SI.

Point group symmetries also play a critical role in constraining the LPE. For instance, the in-plane mirror symmetry $M_y$ forces the off-diagonal components of the nonlinear LPE susceptibility tensor to vanish: $\chi^{xy}(\omega) = \chi^{yx}(\omega) = 0$. This disables helicity-dependent LPE. As a result, achiral vdW stacks (i.e., ones with a mirror plane) do not possess a helicity-dependent LPE.
FIG. 2: Non-helical LPE responses in a vdW heterostructure. BLG/hBN LPE susceptibility tensor $\chi(\omega) = \chi_0(\omega)\mathbb{1}$ in the (a) insulating state ($\mu = 10$ meV, in the gap) and (b) metallic state ($\mu = 20$ meV), numerically evaluated using the low-energy Hamiltonian in Eq. (14). Both $\chi_{\rm I}$ (orange) and $\chi_{\rm FS}$ (green) contribute to the total response (purple) in the insulating state. In the metallic state, an additional metallic $\chi_{\rm SC}$ (yellow) emerges that dominates at low frequencies. The right inset in both panels displays the low-energy bandstructure of BLG/hBN; $\mu$ indicates the Fermi level. The left inset in panel (b) shows a zoom-in of the gray region. Parameters used: $\tau = 1$ ps and $\Delta = 30$ meV.

As a result, comparing with Table I, we find that in these systems the LPE proceeds from $\chi_{\rm I}$, $\chi_{\rm FS}$ and $\chi_{\rm SC}$ only; $\chi_{\rm S}$ vanishes. In contrast, in chiral stacks that possess high crystalline symmetries, the opposite can be true. The combination of $C_{nz}$ ($n \geq 3$) and $C_{2x}$ point-group rotational symmetries can render non-helical LPEs vanishing (see Ref. [13] for an explicit example in twisted bilayer graphene, as well as a full symmetry analysis in the SI). Of course, in chiral vdW stacks where at least one of these point-group rotational symmetries is broken, both helicity-dependent and non-helical LPEs are allowed.

Non-helical LPE response in BLG/hBN.
To exemplify the non-helical LPE response from $\chi_{\rm I}$, $\chi_{\rm FS}$ and $\chi_{\rm SC}$ in TRS-preserving systems, we focus on an achiral vdW system: bilayer graphene aligned with hexagonal boron nitride (BLG/hBN). Aligned BLG/hBN breaks inversion symmetry and possesses $C_{3z}$ and $M_y$ symmetries while breaking $C_{2x}$ (see Fig. 1). As a result, only non-helical LPE responses are allowed; $\chi_{\rm S}$ vanishes. Indeed, the presence of both $C_{3z}$ and $M_y$ guarantees $\chi(\omega) = \chi_0(\omega)\mathbb{1}$, allowing the LPE to manifest even for unpolarized light.

We model the long-wavelength electronic excitations of BLG/hBN using a minimal low-energy Hamiltonian
$$\hat{H} = \begin{pmatrix} \Delta/2 & v\pi^\dagger & 0 & v_3\pi \\ v\pi & -\Delta/2 & \gamma_1 & 0 \\ 0 & \gamma_1 & 0 & v\pi^\dagger \\ v_3\pi^\dagger & 0 & v\pi & 0 \end{pmatrix}, \qquad (14)$$
where $v = 0.639$ eV nm is the Dirac velocity of graphene, $v_3 = 0.081$ eV nm characterizes trigonal warping, $\gamma_1 = 0.4$ eV is the interlayer hopping, and $\pi = \xi k_x + ik_y$, where $\xi = \pm 1$ is the valley index. Using Eq. (1), the polarization operator reads $\hat{p}^z = \mathrm{diag}(1,1,-1,-1)$. Responses of the different valleys are added. $\Delta$ is the AB sublattice asymmetry induced by aligning one side with hBN, thereby breaking inversion symmetry and opening a gap in the spectrum (see inset in Fig. 2). In what follows we will concentrate on low frequencies up to the terahertz range, where large LPEs manifest. This is smaller than the energy range ($150-200$ meV) where superlattice effects from the hBN alignment ensue [35].

The LPE in BLG/hBN was numerically evaluated using Eqs. (4), (7), (8), and (9) at low temperature and summed across both valleys for the electronic states in Eq. (14); the LPE susceptibilities are plotted in Fig. 2 (see SI for a full discussion of the numerical details). We find the interband LPEs peak for frequencies close to the gap size; see Fig. 2a, where $\chi_{\rm I}$ and $\chi_{\rm FS}$ are plotted with the chemical potential in the gap. This indicates that both $\chi_{\rm I}$ (orange) and $\chi_{\rm FS}$ (green) are dominated by interband processes close to the band edge. Interestingly, when the chemical potential is moved into the conduction band (Fig. 2b), a new metallic peak in the nonlinear LPE response emerges at low frequencies that corresponds to $\chi_{\rm SC}$ (yellow); the interband LPE responses still persist but now appear at higher frequencies due to Pauli blocking (see right inset). The metallic peak is particularly striking since it displays large responses (left inset) even for frequencies below any interband optical transition, as well as the opposite sign of susceptibility compared to the interband contributions.

The LPE we unveil demonstrates how stacking can introduce new classes of responses not found in a single layer. Indeed, we anticipate that $\chi_{\rm I}$ and $\chi_{\rm SC}$ can produce LPEs several orders of magnitude larger than those previously known, e.g., in Ref. [13].
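As a sanity check on Eq. (14), the model can be diagonalized numerically: at the valley center ($k = 0$) the $\gamma_1$-coupled dimer bands sit near $\pm\gamma_1$, while in this basis the two low-energy levels are split by $\Delta/2$. A minimal sketch using the parameters quoted above (the check at $k = 0$ is an illustrative choice):

```python
import numpy as np

# Parameters from the text (energies in eV, momenta in nm^-1).
v, v3, gamma1, delta = 0.639, 0.081, 0.4, 0.030

def h_blg_hbn(kx, ky, xi=+1):
    """Low-energy BLG/hBN Hamiltonian, Eq. (14), at valley xi = +/-1."""
    pi = xi * kx + 1j * ky
    pid = np.conj(pi)  # pi^dagger for a scalar momentum
    return np.array([
        [delta / 2, v * pid,    0.0,    v3 * pi],
        [v * pi,    -delta / 2, gamma1, 0.0],
        [0.0,       gamma1,     0.0,    v * pid],
        [v3 * pid,  0.0,        v * pi, 0.0],
    ], dtype=complex)

H0 = h_blg_hbn(0.0, 0.0)
bands = np.linalg.eigvalsh(H0)  # sorted eigenvalues at the valley center
```

Sweeping $(k_x, k_y)$ over a grid and feeding the resulting eigenstates into Eqs. (4), (7)-(9) reproduces the kind of valley-summed evaluation described for Fig. 2.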
For instance, close to the interband peak in BLG/hBN heterostructures, we find that a large interlayer surface charge density difference of order $1\,\mathrm{nC\,cm^{-2}}$ (corresponding to an interlayer voltage of order 2 mV) can be sustained even for a modest light intensity of $1000\,\mathrm{W\,cm^{-2}}$. At very low frequencies, the LPE is expected to be even more pronounced, yielding up to a 5 mV interlayer voltage under the same light intensity (see Fig. 2b, left inset).
Such interlayer voltages can be readily detected using capacitive probes [36, 37] or scanning electron transistors [38], and are not confined to BLG/hBN (on which we have focused for a concrete illustration). Indeed, we expect that LPEs are generic and will manifest in the wide zoo of noncentrosymmetric layered heterostructures available, e.g., layered transition metal dichalcogenides. In addition to providing novel means of photodetection (especially in the THz regime), given the large LPE susceptibilities, the photoinduced interlayer polarizations may even enable light-driven means of switching the electric polarization in a range of vdW layered ferroelectrics that have recently become available [39–41]. Acknowledgements. This work was supported by the Ministry of Education Singapore under its MOE AcRF Tier 3 Grant No. MOE 2018-T3-1-002 and a Nanyang Technological University start-up grant (NTU-SUG).
∗ Electronic address: justinsong@ntu.edu.sg
[1] L. Balents, C. R. Dean, D. K. Efetov, and A. F. Young, Nature Physics 16, 725 (2020).
[2] J. C. Song and N. M. Gabor, Nature Nanotechnology 13, 986 (2018).
[3] Y. Zhang, T.-T. Tang, C. Girit, Z. Hao, M. C. Martin, A. Zettl, M. F. Crommie, Y. R. Shen, and F. Wang, Nature 459, 820 (2009).
[4] Q. Tong, H. Yu, Q. Zhu, Y. Wang, X. Xu, and W. Yao, Nature Physics 13, 356 (2017).
[5] W. Yao, D. Xiao, and Q. Niu, Physical Review B 77, 235406 (2008).
[6] J. C. Song and M. A. Kats, Nano Letters 16, 7346 (2016).
[7] J. Yin, C. Tan, D. Barcons-Ruiz, I. Torre, K. Watanabe, T. Taniguchi, J. C. Song, J. Hone, and F. H. Koppens, Science 375, 1398 (2022).
[8] C. Ma, S. Yuan, P. Cheung, K. Watanabe, T. Taniguchi, F. Zhang, and F. Xia, Nature 604, 266 (2022).
[9] G. Chen, L. Jiang, S. Wu, B. Lyu, H. Li, B. L. Chittari, K. Watanabe, T. Taniguchi, Z. Shi, J. Jung, et al., Nature Physics 15, 237 (2019).
[10] H. Zhou, T. Xie, A. Ghazaryan, T. Holder, J. R. Ehrets, E. M. Spanton, T. Taniguchi, K. Watanabe, E. Berg, M. Serbyn, et al., Nature 598, 429 (2021).
[11] H. Zhou, T. Xie, T. Taniguchi, K. Watanabe, and A. F. Young, Nature 598, 434 (2021).
[12] S. C. de la Barrera, S. Aronson, Z. Zheng, K. Watanabe, T. Taniguchi, Q. Ma, P. Jarillo-Herrero, and R. Ashoori, Nature Physics pp. 1–5 (2022).
[13] Y. Gao, Y. Zhang, and D. Xiao, Phys. Rev. Lett. 124, 077401 (2020).
[14] C. Aversa and J. E. Sipe, Phys. Rev. B 52, 14636 (1995).
[15] J. E. Sipe and A. I. Shkrebtii, Phys. Rev. B 61, 5337 (2000).
[16] E. Blount, in Solid State Physics, edited by F. Seitz and D. Turnbull (Academic Press, New York, 1962), vol. 13, pp. 305–373.
[17] J. Ahn, G.-Y. Guo, and N. Nagaosa, Phys. Rev. X 10, 041041 (2020).
[18] F. de Juan, A. G. Grushin, T. Morimoto, and J. E. Moore, Nature Communications 8, 15995 (2017).
[19] L. Gao, Z. Addison, E. J. Mele, and A. M. Rappe, Phys. Rev. Res. 3, L042032 (2021).
[20] H. Watanabe and Y. Yanase, Phys. Rev. X 11, 011001 (2021).
[21] O. Matsyshyn and I. Sodemann, Phys. Rev. Lett. 123, 246602 (2019).
[22] I. Sodemann and L. Fu, Phys. Rev. Lett. 115, 216806 (2015).
[23] R. Resta and D. Vanderbilt, Theory of Polarization: A Modern Approach (Springer Berlin Heidelberg, Berlin, Heidelberg, 2007), pp. 31–68.
[24] L.-k. Shi, O. Matsyshyn, J. C. W. Song, and I. Sodemann Villadiego, arXiv e-prints (2022), 2207.03496.
[25] O. Matsyshyn, J. C. W. Song, I. Sodemann Villadiego, and L.-k. Shi, arXiv e-prints (2023), 2301.00811.
[26] R. von Baltz and W. Kraut, Phys. Rev. B 23, 5590 (1981).
[27] B. I. Sturman and V. M. Fridkin, The Photovoltaic and Photorefractive Effects in Noncentrosymmetric Materials, no. 8 in Ferroelectricity and Related Phenomena (Gordon and Breach Science Publishers, Philadelphia, 1992).
[28] J. A. Brehm, S. M. Young, F. Zheng, and A. M. Rappe, The Journal of Chemical Physics 141, 204704 (2014).
[29] T. Morimoto and N. Nagaosa, Science Advances 2 (2016).
[30] N. Nagaosa and T. Morimoto, Advanced Materials 29, 1603345 (2017).
[31] D. E. Parker, T. Morimoto, J. Orenstein, and J. E. Moore, Phys.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' B 99, 045121 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' [32] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Matsyshyn, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Piazza, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Moessner, and I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Sodemann, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' 127, 126604 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' [33] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Kraut and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' von Baltz, Phys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' B 19, 1548 (1979).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' [34] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Matsyshyn, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Dey, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Sodemann, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Sun, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' D 54, 404001 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' [35] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Yankowitz, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Xue, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Cormode, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Sanchez- Yamagishi, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Watanabe, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Taniguchi, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Jarillo- Herrero, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Jacquod, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' LeRoy, Nature physics 8, 382 (2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' [36] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Young and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Levitov, Phys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' B 84, 085441 (2011).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' [37] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Young, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Dean, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Meric, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Sorgenfrei, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Ren, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Watanabe, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Taniguchi, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Hone, K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Shepard, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Kim, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' B 85, 235458 (2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' [38] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Martin, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Feldman, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Weitz, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Allen, and A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Yacoby, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' 105, 256806 (2010).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' [39] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Zheng, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Ma, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Bi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' de la Barrera, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Liu, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Mao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Zhang, N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Kiper, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Watanabe, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Taniguchi, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=', Nature 588, 71 (2020), ISSN 1476-4687.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' [40] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Wang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Yasuda, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Zhang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Liu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Watanabe, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Taniguchi, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Hone, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Fu, and P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Jarillo-Herrero, Na- ture Nanotechnology 17, 367 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' [41] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Yasuda, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Wang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Watanabe, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Taniguchi, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Jarillo-Herrero, Science 372, 1458 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' [42] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Wang, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Meric, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Huang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Gao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Gao, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Tran, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Taniguchi, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Watanabe, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Campos, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Muller, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=', Science 342, 614 (2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' 6 Supplementary Information for “Layer photovoltaic effect in van der Waals heterostructures” Density matrix and perturbation theory In this section, we discuss perturbative corrections to the density matrix in the presence of an irradiating electromagnetic field;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' this is used to directly compute the interlayer polarization responses found in the main text.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TNE0T4oBgHgl3EQfUwCf/content/2301.02255v1.pdf'} +page_content=' Starting from the Liouville equation: i̵hdˆρ(t)/dt = [ ˆH(k,t), ˆρ(t)], with electric field employed in the length gauge ˆH(k,t) = H0(k) + eˆr ⋅ E(t)eηt, we compute per- turbative corrections for density matrix (DM): ˆρ = ˆρ(0) + ˆρ(1)+ ˆρ(2)+O(E3), where the index (0,1,2) represents the order of corrections.' 
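The structure of this expansion can be illustrated with a minimal two-level model (the Hamiltonian, drive parameters, and observable below are illustrative, not taken from the paper's BLG/hBN model): integrating the Liouville equation numerically and doubling the drive amplitude should quadruple any quantity that enters at second order in the field, such as the field-induced excited-state population.

```python
import numpy as np

hbar = 1.0
H0 = np.diag([0.0, 1.0])                  # two-level Hamiltonian (illustrative)
V = np.array([[0.0, 1.0], [1.0, 0.0]])    # dipole-like coupling
rho0 = np.diag([1.0, 0.0]).astype(complex)  # ground-state density matrix

def step(H, rho, dt):
    # Exact unitary step for piecewise-constant H via eigendecomposition.
    w_, U = np.linalg.eigh(H)
    P = U @ np.diag(np.exp(-1j * w_ * dt / hbar)) @ U.conj().T
    return P @ rho @ P.conj().T

def excited_population(E0, w=0.8, T=100.0, dt=0.01):
    # Integrate i*hbar drho/dt = [H(t), rho] with an off-resonant drive.
    rho = rho0.copy()
    for n in range(int(T / dt)):
        H = H0 + E0 * np.cos(w * n * dt) * V
        rho = step(H, rho, dt)
    return rho[1, 1].real

p1 = excited_population(1e-3)
p2 = excited_population(2e-3)
print(p2 / p1)  # ~4: the population is an O(E^2) response
```

The ratio of ~4 under field doubling is the numerical signature that this observable is governed by the second-order correction $\hat\rho^{(2)}$.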
The second-order correction is given by

\begin{align*}
\rho^{(2)}_{nm}(t) = \frac{e^2}{\hbar^2} \iint &\frac{d\omega_2\, d\omega_1}{(2\pi)^2}\; E^{\alpha}(\omega_2)\, E^{\beta}(\omega_1)\, e^{-i(\omega_1+\omega_2)t + 2\eta t}\; \times \\
\Bigg\{ &\frac{\delta_{nm}\, i\partial_{\beta}\, i\partial_{\alpha} f_{n}}{(\omega_1+\omega_2+2i\eta)(\omega_2+i\eta)} \\
+\; &\frac{1}{\omega_1+\omega_2-\omega_{nm}+2i\eta}\, \frac{i\partial_{\beta} A^{\alpha}_{nm}\, f_{mn}}{\omega_2-\omega_{nm}+i\eta} \\
+\; &\frac{A^{\beta}_{nm}\, i\partial_{\alpha} f_{mn}}{(\omega_2+i\eta)(\omega_1+\omega_2-\omega_{nm}+2i\eta)} \\
+\; &\frac{1}{\omega_1+\omega_2-\omega_{nm}+2i\eta} \sum_{c} \left[ \frac{A^{\beta}_{nc} A^{\alpha}_{cm}\, f_{mc}}{\omega_2-\omega_{cm}+i\eta} - \frac{A^{\alpha}_{nc} A^{\beta}_{cm}\, f_{cn}}{\omega_2-\omega_{nc}+i\eta} \right] \Bigg\}. \tag{S1}
\end{align*}

Using Eq. (S1) and Eq. (1) in the main text, the total polarization response is given by
$$
\langle P^z\rangle/d = \sum_{i=0}^{\infty} \int \frac{d\mathbf{k}}{(2\pi)^2} \sum_{nm} p^z_{nm}\, \rho^{(i)}_{mn}. \tag{S2}
$$
In the main text, we focused on the $i = 2$ contribution since it gives the leading DC (i.e., static) photoinduced interlayer polarization. For monochromatic light, the DC contribution can be obtained by averaging the polarization over one period: $\langle f\rangle_{\rm st} = \frac{1}{T}\int_0^T f(t)\, dt$.
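The period average acts as a frequency filter on the oscillating factor $e^{-i(\omega_1+\omega_2)t}$ of Eq. (S1): only frequency pairs with $\omega_1 + \omega_2 = 0$ survive, which is why the DC polarization is built from the $(\omega, -\omega)$ terms. A minimal numerical sketch (the drive frequency and grid are illustrative):

```python
import numpy as np

w = 2 * np.pi          # drive frequency (arbitrary units)
T = 2 * np.pi / w      # one optical period
t = np.linspace(0.0, T, 4096, endpoint=False)

def period_average(w1, w2):
    # <f>_st = (1/T) \int_0^T f(t) dt for f(t) = e^{-i (w1 + w2) t};
    # a uniform-grid mean is exact for periodic integrands.
    return np.exp(-1j * (w1 + w2) * t).mean()

print(abs(period_average(w, -w)))  # 1.0: rectified (DC) pair survives
print(abs(period_average(w, w)))   # ~0: oscillating pair averages out
```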
In so doing, we concentrated on the $\omega_1 + \omega_2 = 0$ contributions in Eq. (S1). The injection [Eq. (4)], shift [Eq. (5)], and Fermi-sea [Eqs. (7) and (8)] contributions to the LPE in the main text can be obtained by plugging the terms in the third and fifth lines of Eq. (S1) into Eq. (S2). These contributions can be naturally delineated into resonant pieces (including delta functions, e.g., for $\chi_{\rm I}$ and $\chi_{\rm S}$) and off-resonant pieces (involving principal parts, e.g., $\chi_{\rm FS}$) in the limit of vanishingly small $\eta \to 0$. Similarly, the SC and Berry contributions to the LPE can be obtained directly by substituting the terms in the second and fourth lines of Eq. (S1) into Eq. (S2), respectively.

Interlayer current

In this section, we provide a fuller account of the intrinsic interlayer current. First, for the convenience of the reader, we recall that the interlayer current operator can be written as
$$
\hat j^z = \frac{d\hat P^z}{dt} = \frac{1}{i\hbar}[\hat P^z, \hat H]. \tag{S3}
$$
As a result, the time-varying interlayer current can be directly evaluated as
$$
\langle j^z(t)\rangle = {\rm Tr}[\hat j^z \hat\rho(t)] = \frac{1}{i\hbar}{\rm Tr}\{[\hat P^z, \hat H]\,\hat\rho(t)\} = \frac{1}{i\hbar}{\rm Tr}\{\hat P^z[\hat H, \hat\rho(t)]\}, \tag{S4}
$$
where we used the cyclic permutation property of the trace: ${\rm Tr}\{[A,B]C\} = {\rm Tr}\{A[B,C]\}$.
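The cyclic-trace step used in Eq. (S4) can be checked numerically on random Hermitian matrices standing in for $\hat P^z$ and $\hat H$ (the matrix size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    # Random Hermitian matrix as a stand-in for a physical operator.
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

A, B, C = (rand_herm(4) for _ in range(3))
comm = lambda x, y: x @ y - y @ x

# Tr{[A, B] C} = Tr{A [B, C]} follows from cyclic invariance of the trace.
lhs = np.trace(comm(A, B) @ C)
rhs = np.trace(A @ comm(B, C))
print(np.allclose(lhs, rhs))  # True
```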
Importantly, the Liouville equation for the full system requires $i\hbar\, d\hat\rho(t)/dt = [\hat H, \hat\rho(t)]$. As a result, we can directly identify the interlayer current as
$$
\langle j^z(t)\rangle = {\rm Tr}\{\hat P^z \dot{\hat\rho}(t)\}, \tag{S5}
$$
thus reproducing Eq. (11) of the main text. The rectified component of the interlayer current can be directly obtained as the period average of $\langle j^z(t)\rangle$:
$$
j^z_{\rm rectified} = \int_0^T \frac{dt}{T} \lim_{\eta\to 0} {\rm Tr}\{\hat P^z \dot{\hat\rho}(t)\} + O(E^4). \tag{S6}
$$
Crucially, it is the time derivative of the density matrix that controls the rectified interlayer current response. Directly taking the time derivative of the second-order correction to the density matrix in Eq. (S1), we find that only two terms in Eq. (S1) generate a finite contribution to the current above, namely the terms in $\rho^{(2)}_{nm}(t)$ that originally corresponded to the injection and semiclassical LPEs. Notice, however, that the latter contribution to the intrinsic interlayer current in Eq. (S6) [after contraction with the electric fields and symmetrizing $\alpha \leftrightarrow \beta$, $\omega \leftrightarrow -\omega$] displays a delta-function peak at zero frequency, $\delta(\omega)$. As a result, at finite non-zero frequencies only the injection-type response remains in the limit $\eta \to 0$.

Time-reversal, inversion and spatial symmetries

This section briefly discusses the constraints dictated by time-reversal, inversion, and spatial symmetries on the LPE susceptibility tensor. We first focus on time-reversal symmetry ($\mathcal{T}$).
TRS produces the following relations for the matrix elements of the Berry connection and the polarization operator, as well as the band energy differences: $A_{nm}(\mathbf{k}) = A_{mn}(-\mathbf{k})$, $\omega_{nm}(\mathbf{k}) = \omega_{nm}(-\mathbf{k})$, $p_{nm}(\mathbf{k}) = p_{mn}(-\mathbf{k})$. Applying these relations, we find that the Berry LPE response [Eq. (10)] vanishes in the presence of TRS (since $\partial_{\mathbf{k}} f$ is odd while $A_{nm}$ is even under $\mathbf{k} \to -\mathbf{k}$). In contrast, the SC LPE response persists in the presence of TRS. The interband LPE responses, however, have susceptibilities that obey
$$
\chi^{\alpha\beta}_{\rm FS}(\omega) = \chi^{\alpha\beta}_{\rm FS}(-\omega) = \chi^{\beta\alpha}_{\rm FS}(\omega), \tag{S7}
$$
$$
\chi^{\alpha\beta}_{\rm S}(\omega) = -\chi^{\alpha\beta}_{\rm S}(-\omega), \tag{S8}
$$
$$
\chi^{\alpha\beta}_{\rm I}(\omega) = \chi^{\alpha\beta}_{\rm I}(-\omega) = \chi^{\beta\alpha}_{\rm I}(\omega), \tag{S9}
$$
in the presence of TRS.

           My    C3z    C2x    C2x + C3z
  LPL      ✓     ✓      ✓      ✗
  CPL      ✗     ✓      ✓      ✓

TABLE SI: This table summarizes whether LPE responses are allowed (✓) or forbidden (✗) in systems exposed to incident light with either circular (CPL) or linear (LPL) polarization in the presence of the spatial symmetries C2x, C3z and My.
Note that, in a similar way, the presence of inversion symmetry ($\mathcal{I}$) demands $A_{nm}(\mathbf{k}) = -A_{nm}(-\mathbf{k})$, $\omega_{nm}(\mathbf{k}) = \omega_{nm}(-\mathbf{k})$, $p_{nm}(\mathbf{k}) = -p_{nm}(-\mathbf{k})$. As a result, inversion symmetry forces all second-order LPE responses to vanish; broken inversion symmetry is required to realize LPE responses, as expected. Point-group symmetries play an additional critical role in constraining LPE responses. For instance, the in-plane mirror symmetry $M_y$ forces the off-diagonal components of the susceptibility tensor to vanish: $\chi^{xy}(\omega) = \chi^{yx}(\omega) = 0$. As a result, in-plane mirror symmetry disables helicity-dependent second-order interlayer polarization responses (see column 2 of Table SI). Indeed, as discussed in the main text, BLG/hBN possesses in-plane mirror symmetry, zeroing the helicity-dependent $\chi_{\rm S}$ response. We now discuss the impact of point-group rotational symmetry.
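Constraints of this kind can be checked mechanically. If a symmetry acts on the in-plane field as $\mathbf{E} \to M\mathbf{E}$ (with $M$ diagonal, as for $M_y$ and $C_{2x}$) and on the out-of-plane polarization as $P_z \to s\, P_z$, then invariance of $P_z = \chi^{\alpha\beta} E_\alpha E_\beta$ forces every component of $\chi$ that picks up an overall sign $-1$ to vanish. The sketch below encodes only this sign counting; the representations used are our own bookkeeping, not code from the paper:

```python
import numpy as np

def forbidden_components(M, s):
    # chi[a, b] must equal s * M[a, a] * M[b, b] * chi[a, b] for diagonal M,
    # so components where that prefactor is -1 are forced to vanish.
    signs = s * np.outer(np.diag(M), np.diag(M))
    return signs == -1

My = np.diag([1.0, -1.0])    # y -> -y; keeps P_z  (s = +1)
C2x = np.diag([1.0, -1.0])   # in-plane part of the pi-rotation about x; flips P_z (s = -1)

print(forbidden_components(My, +1))   # off-diagonal chi_xy, chi_yx forbidden
print(forbidden_components(C2x, -1))  # diagonal chi_xx, chi_yy forbidden
```

This reproduces both statements in the text: $M_y$ kills the off-diagonal (helicity-dependent) components, while $C_{2x}$ kills the diagonal ones.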
For instance, in the presence of C_{nz} (n ≥ 3) symmetry, the LPE susceptibility tensor obeys χ^{xx}(ω) = χ^{yy}(ω) and χ^{xy}(ω) = −χ^{yx}(ω). Similarly, in the presence of C_{2x} symmetry, the out-of-plane polarization has to switch its sign, ⟨P_z⟩ → −⟨P_z⟩, under the operation of C_{2x}: this means that the susceptibility components that preserve their sign under C_{2x} are forced to vanish, χ^{xx}(ω) = χ^{yy}(ω) = 0. However, the off-diagonal components of the LPE susceptibility tensor are allowed. Note that the off-diagonal components in principle encode both helicity-dependent responses [i.e., the antisymmetric part, χ^{xy}(ω) − χ^{yx}(ω)] as well as nonhelical responses to linearly polarized light [i.e., the symmetric part, χ^{xy}(ω) + χ^{yx}(ω)].
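These tensor constraints can be derived mechanically. Under an operation with in-plane matrix R (acting on the field components) and sign s picked up by the out-of-plane polarization, the susceptibility must satisfy χ = s·Rᵀ χ R, so the allowed components span the eigenvalue-1 subspace of that linear map. The sketch below is our own illustration; the transformation convention is an assumption chosen to be consistent with the relations stated above.

```python
import numpy as np

def allowed_chi_basis(R, s_pz):
    """Basis of 2x2 susceptibility tensors chi (P_z = chi_ab E_a E_b*)
    obeying the symmetry constraint chi = s_pz * R.T @ chi @ R."""
    # Row-major flattening: the map chi -> R.T @ chi @ R is kron(R.T, R.T)
    T = s_pz * np.kron(R.T, R.T)
    _, s, vt = np.linalg.svd(T - np.eye(4))
    return vt[s < 1e-10]          # null space of (T - 1) = invariant tensors

th = 2 * np.pi / 3
C3z = np.array([[np.cos(th), -np.sin(th)],
                [np.sin(th),  np.cos(th)]])   # 120-degree rotation, P_z -> +P_z
C2x = np.diag([1.0, -1.0])                    # y -> -y in plane, P_z -> -P_z

for b in allowed_chi_basis(C3z, +1).reshape(-1, 2, 2):
    # every C3z-invariant tensor obeys chi_xx = chi_yy and chi_xy = -chi_yx
    assert np.isclose(b[0, 0], b[1, 1]) and np.isclose(b[0, 1], -b[1, 0])
for b in allowed_chi_basis(C2x, -1).reshape(-1, 2, 2):
    # under C2x the diagonal components are forced to vanish
    assert np.isclose(b[0, 0], 0.0) and np.isclose(b[1, 1], 0.0)
print("C3z: chi_xx = chi_yy, chi_xy = -chi_yx;  C2x: only off-diagonal terms survive")
```

Both symmetry classes leave a two-dimensional space of allowed tensors, matching the component relations quoted in the text.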
Crucially, the presence of just one of the above symmetries alone is compatible with both helical and non-helical responses (see columns 3 and 4 of Table SI). However, when both C_{3z} and C_{2x} symmetries are present simultaneously (e.g., in pristine twisted bilayer graphene), they ensure that only helicity-dependent LPE responses manifest (in the case of TRS, only χ_S is non-zero); non-helical responses in such systems vanish.

Details of numerics

The evaluation of the LPE responses for BLG/hBN shown in the main text was carried out numerically using the BLG/hBN Hamiltonian in Eq. (14) of the main text together with the expressions for the various LPE responses found there. In so doing, we used an effective relaxation time τ = 1 ps as an illustration, characteristic of ultraclean graphene-based heterostructures [42].
Further, in numerically evaluating integrals containing delta functions and principal values, we compared the LPE response expressions directly with the density matrix in Eq. (S1), using their corresponding finite but small η representations (and taking η → 1/τ). Moreover, we found that our numerically evaluated LPE responses converged rapidly at low temperature. While χ_I and χ_S depend on the interband transition contours and χ_SC depends on the Fermi surface, the Fermi sea responses χ_FS in Eqs. (7) and (8) of the main text in principle depend on an entire band sum. Here we evaluated χ_FS systematically, starting from the Dirac point and moving outwards in momentum space; importantly, we achieved rapid convergence for v|k_max| ∼ 100 meV, within the range of validity of our low-energy Hamiltonian. The grid for the momentum-space integration was chosen to be 300 × 300 k-points for the interband terms.
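The finite-η prescription referred to here is the standard Lorentzian regularization. A quick generic check (our own sketch, not the paper's code) shows that the two replacements behave as the delta function and the principal value should:

```python
import numpy as np

# Finite-eta stand-ins used inside numerical integrals (eta plays the role
# of 1/tau in the text):
#   delta(x) -> (1/pi) * eta / (x**2 + eta**2)
#   P(1/x)   -> x / (x**2 + eta**2)
eta = 1e-3
x = np.linspace(-10.0, 10.0, 1_000_001)
dx = x[1] - x[0]

delta_eta = (eta / np.pi) / (x**2 + eta**2)
pv_eta = x / (x**2 + eta**2)

print(np.sum(delta_eta) * dx)   # ~1: delta-function normalization survives
print(np.sum(pv_eta) * dx)      # ~0: odd integrand, principal value of 1/x
```

Shrinking η toward 1/τ sharpens both kernels while keeping the integrals finite, which is why the density-matrix and response-function evaluations can be compared directly.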
For χ_SC we adopted a finer mesh with up to 900 × 900 k-points to capture the sharpness of the Fermi surface at low temperatures.

Interlayer polarization and interlayer voltage

While the photoinduced interlayer polarizations can be obtained directly from Eq. (2), given the LPE susceptibilities and the oscillating electric fields of the incident EM irradiation, these interlayer polarizations can also manifest as an interlayer voltage, ΔU. Modeling the bilayer system as a simple parallel-plate capacitor with an interlayer separation of 2d = 3.46 Å, we find

ΔU = (2d/ε₀)(δσ/2) = ⟨δP^z_static⟩/(2ε₀),   (S10)

where ε₀ denotes the vacuum permittivity between the graphene planes, and δσ (with units of C·cm⁻²) is the photoinduced difference between the surface charge densities on the top and bottom layers,

δσ = ⟨δP^z_static⟩/(2d) = Σ_{αβ} Re[E^α E^{β*} χ^{αβ}(Ω)],   (S11)

obtained directly from Eq. (2) in the main text.
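Equation (S10) reduces to a one-line unit conversion. A minimal sketch follows; the polarization value used here is hypothetical, chosen only to illustrate the conversion:

```python
# Interlayer voltage from the photoinduced static polarization, Eq. (S10):
#   dU = <dP_z_static> / (2 * eps0)
EPS0 = 8.8541878128e-12          # vacuum permittivity, F/m (CODATA value)

def interlayer_voltage(dP_z):
    """dP_z: photoinduced out-of-plane polarization density, in C/m^2."""
    return dP_z / (2.0 * EPS0)   # volts

dP = 1e-12                       # hypothetical polarization, C/m^2
print(f"dU = {interlayer_voltage(dP):.3e} V")
```

Note the interlayer distance 2d drops out of ΔU itself: it enters δσ in Eq. (S11) and the capacitor formula with opposite powers.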
Realization of multiple charge density waves in NbTe2 at the monolayer limit

Yusong Bai1, Zemin Pan1, Jinghao Deng1, Xiaoyu Lin1, Tao Jian1, Chao Zhu1, Da Huo1, Zhengbo Cheng1, Ping Cui3, Zhenyu Zhang3, Qiang Zou2*, Chendong Zhang1,4*

1 School of Physics and Technology, Wuhan University, Wuhan 430072, China
2 Department of Physics and Astronomy, West Virginia University, WV 26506, USA
3 International Center for Quantum Design of Functional Materials (ICQD), Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China
4 Wuhan Institute of Quantum Technology, Wuhan 430206, China

*Correspondence and requests for materials should be addressed to: cdzhang@whu.edu.cn (C.D.Z.), qzou.iphy@gmail.com (Q.Z.)

Abstract: Layered transition-metal dichalcogenides (TMDCs) down to the monolayer (ML) limit provide a fertile platform for exploring charge-density waves (CDWs). Though bulk NbTe2 is known to harbor a single-axis 3 × 1 CDW coexisting with non-trivial quantum properties, the scenario in the ML limit has remained experimentally unknown. In this study, we unveil the richness of the CDW phases in ML NbTe2, where not only the theoretically predicted 4 × 4 and 4 × 1 phases, but also two unexpected √28 × √28 and √19 × √19 phases, can be realized. For such a complex CDW system, we establish an exhaustive growth phase diagram via systematic efforts in material synthesis and scanning tunneling microscope characterization.
Moreover, we report that the energetically stable phase is the larger-scale order (√19 × √19), which surprisingly contradicts the prior prediction (4 × 4). These findings are confirmed using two different kinetic pathways, i.e., direct growth at the proper growth temperature (T), and low-T growth followed by high-T annealing. Our results provide a comprehensive diagram of the "zoo" of CDW orders in ML 1T-NbTe2 for the first time and offer a new material platform for studying novel quantum phases in the 2D limit.

KEYWORDS: charge-density waves, monolayer 1T-NbTe2, molecular beam epitaxy, scanning tunneling microscopy and spectroscopy

Charge density waves (CDWs), a collective electronic phenomenon with atomic-scale periodic modulation of both the lattice and charge degrees of freedom,1 have been discovered in many layered transition-metal dichalcogenides (TMDCs).2,3 On reducing dimensionality, the electron–electron and electron–phonon interactions can be markedly enhanced owing to the quenched screening,2-4 particularly in the monolayer (ML) limit, where interactions in the vertical direction are further removed. As a consequence, exotic quantum phenomena associated with new CDW orders emerge in such ML systems. For instance, ML VSe2 shows a new symmetry-broken √7 × √3 charge order with an enhanced CDW transition temperature, in contrast to the 4 × 4 CDW in bulk VSe2.5,6

The group V dichalcogenides MX2 (M = V, Nb, Ta; X = S, Se, Te) are one of the best-known material families to harbor assorted 2D CDW orders.7-12 Unlike other relatives in this family, niobium ditelluride (NbTe2) has been less explored in terms of both experiment and theory.
Previous experiments on bulk NbTe2 have shown the coexistence of a 3 × 1 CDW order and superconductivity,13-16 as well as anomalous magnetoresistance behaviors.16,17 Owing to the shorter Te-Te spacing between layers, the interlayer interaction in Te-based TMDCs is believed to be stronger than that in Se/S compounds.18 Therefore, telluride layered materials can show dramatic thickness-dependent structural, electronic, magnetic, and topological phase transitions.19-22 Recently, a pioneering theoretical effort predicted that ML NbTe2 may exhibit 4 × 4, 4 × 1, 3 × 3, and 3 × 1 CDW orders, with the 4 × 4 phase the most stable among them, though the differences in their formation energies are fairly small.23 However, despite
The pure phase of +√19 × √19 CDW is attained at substantially high growth (or annealing) temperature (> +400 ℃), manifesting itself as the most energetically stable CDW in ML NbTe2. The +successful controlled growth of multiple types of CDW orders in ML 1T-NbTe2 +provides a prospective material system for exploring novel quantum phases in the 2D +limit. +Results and discussion +ML NbTe2 is in a high-symmetry octahedral (1T) structure at room temperature,24,25 +where the hexagonally arranged Nb atoms are sandwiched by two layers of Te atoms in +an octahedral coordination, as sketched in Figure 1a (top view in the upper panel and +side view in the lower panel). Using molecular beam epitaxy (MBE), we successfully + +grew ML NbTe2 on a bilayer graphene (BLG)-terminated 6H-SiC(0001) substrate. +Figure 1b shows a typical large-scale STM image of sub-ML NbTe2. The apparent +height of the ML NbTe2 film was 8.9 Å, as shown in the inset of Figure 1b. The room- +temperature atomically resolved STM image (Figure 1c) clearly shows a well-ordered +hexagonal lattice with a lattice constant of a = 3.64 ± 0.04 Å. In situ reflection high- +energy electron diffraction (RHEED) patterns were measured to monitor the growth +process of NbTe2 on BLG. After 1 hour of growth (0.9 ML coverage), two sets of strikes +appeared (Figure 1d), which suggests that NbTe2 has its own lattice structure rather than +taking on the lattice structure of BLG. Using BLG as a reference, the lattice constant of +ML NbTe2 was estimated to be a = 3.68 Å, which is consistent with both our STM +results (Figure 1c) and the reported value of 3.66 Å for NbTe2 nanoplates.26 X-ray +photoemission spectroscopy (XPS) measurements were performed to characterize the +formation of NbTe2. 
As shown in Figure S1, the Nb 3d5/2 (201.9 eV) and Nb 3d3/2 (204.7 +eV) peaks from Nb4+, as well as the Te 3d5/2 (573.2 eV) and Te 3d3/2 (581.4 eV) peaks +from Te2−, were detected, which is consistent with previous work.27 The combined STM, +RHEED, and XPS characterization results confirm the successful epitaxial growth of a +high-quality ML NbTe2 film on a BLG-terminated 6H-SiC(0001) substrate. +We fabricated ML NbTe2 with a series of growth temperatures (Tgrowth) and +characterized those samples at 4.7 K in an MBE-STM integrated system. Figure 2a +shows a 30 × 30 nm2 STM image of 0.8 ML NbTe2 with Tgrowth = 250 ℃ that displays +the coexistence of three superstructures: (1) a well-ordered hexagonal superstructure +with a periodicity of 1.45 nm (marked by the solid cyan rhombus and labeled as a 4 × + +4 superstructure); (2) a 1D striped superstructure with a periodicity of 1.28 nm along +the direction perpendicular to the stripes (labeled as a 4 × 1 superstructure); and (3) a +disordered hexagonal superstructure that exhibits a local triaxial period but without +long-range order (marked by the dashed blue rhombus and labeled as a disordered √28 +× √28 superstructure). Increasing the growth temperature to 300 ℃ resulted in more 4 +× 1 phase and ordered √28 × √28 phase with a periodicity of 1.91 nm, but the 4 × 4 +phase disappeared (Figure 2b and Figure S2). Interestingly, the 4 × 1 phase was not +found after the Tgrowth was increased to 350 ℃, while the √28 × √28 phase continued +to increase and coexisted with a new hexagonal superstructure with a periodicity of +1.58 nm (marked by the solid red rhombus and labeled as a √19 × √19 superstructure +in Figure 2c and Figure S2). Further increasing the substrate temperature (400-450 ℃) +yielded a pure √19 × √19 phase (Figure 2d). We summarized the evolution of the four +superstructures in ML NbTe2 as a function of growth temperature in Figure 2e. 
The +surface morphology evolution is shown to be driven by Tgrowth, indicating that the 4 × +4, 4 × 1 and √28 × √28 phases are metastable, while the √19 × √19 phase is energy +stable. +To examine the phase stability of those superstructures, we carried out a post- +annealing process on the low-temperature grown ML NbTe2. Figure 3a shows an STM +image of ML NbTe2 grown and annealed at 250 ℃, which is in good agreement with +the results in Figure 2a. Different from the direct growth case, the 4 × 1 superstructure +was not observed when we fully annealed the sample to 300 ℃ (Figure 3b, more results +are seen in Figure S3). This suggests the Te-rich conditions may be essential to prepare + +the 4 × 1 phase (by lowering the formation enthalpy). Further increasing the annealing +temperature to 350 ℃ led to a higher coverage of the √28 × √28 superstructure and an +appearance of the √19 × √19 superstructure (Figure 3c), same as Figure 2c. A pure +√19 × √19 phase was generated after the film was annealed over 400 ℃ (Figure 3d), +similar to the results of the direct growth cases with growth temperatures from 400 to +450 ℃ (Figure 2d). The evolution of those superstructures upon post-annealing +confirms the instability of the 4 × 4, 4 × 1 and √28 × √28 phases, which was further +verified by another post-annealing process on the sample grown at 300 ℃ (see Figure +S3 for details). +In ref. 23, first-principles calculations suggested the appearance of 4 × 4, 4 × 1, 3 × +3, and 3 × 1 CDW orders. Our experiments realized the predicted 4 × 4 and 4 × 1 +superstructures, but not the 3 × 3 and 3 × 1 orders. To validate their CDW origins, we +performed further STM/S investigations on those two phases. Figure 4a shows an +atomic-resolved STM image of the 4 × 1 superstructure. Remarkably, a 4a periodicity +(~1.45 nm) along the direction of the atomic lattice can be distinguished (Figure 4b). 
+This 4a modulation is confirmed by the two additional peaks circled in green in the +fast-Fourier transform (FFT) image (Figure 4c). Moreover, we noticed that the +distortion of the topmost Te atoms was significantly large (see Figure 4a). To give a +more quantitative description of the lattice distortion, we labeled the four atoms in the +4a period as Te1, Te2, Te3, and Te4 (marked in Figure 4b) and summarized the distance +between adjacent Te atoms in Table S1. It was found that the distance between Te1-Te2 +and Te4-Te1 (Te2-Te3 and Te3-Te4) was longer (shorter) than the Te-Te distance of an + +ideal octahedral structure, and the variation in the Te-Te bond length was up to 20%, +which is much larger than the 1-7% distortion in conventional ML TMDs within the +CDW order.3,28 +We further examined the electronic structures of the 4 × 1 superstructure by STS. +The large-scale tunneling spectra show a suppression of DOS around the Fermi level +and display spatial modulation (homogeneity) in the direction perpendicular (parallel) +to the stripes (see Figure S4). The magnified averaged dI/dV spectroscopy is shown in +Figure 4d, where a soft gap of ~20 meV at the Fermi level was observed. This gap size +is comparable to that of other typical CDW TMDs at the ML limit.9,11,29 Figure 4e and +4f show the dI/dV maps of the same area obtained at -30 mV and +30 mV. The stripe +modulations with an apparent intensity reversal are distinguished (see Figure S5 for +more dI/dV maps). Generally, a spatial phase flip in the conductance maps at opposite +energies near the Fermi level is a hallmark of the classic Peierls CDW, for which the +intensity of the charge accumulation region is enhanced under negative bias, whereas +the charge depletion region shows enhanced intensity under positive bias. 
Hence, the +contrast reversal between the filled and empty states in the 4 × 1 phase strongly suggests +that it originates from a CDW order.30–32 Thus, the 4a periodical modulation and atomic +lattice distortion, the gap structure around the Fermi level, and the spatial phase flip in +the conductance maps collectively support a CDW nature of the 4 × 1 superstructure +Similar CDW behaviors are also observed for the 4 × 4 superstructure. Figure 5a +shows an STM image with atomic resolution, which captures a 1 × 1 hexagonal unit +(marked in white) and 4 × 4 reconstructed patterns (marked in cyan). The line profile + +along the black arrow in Figure 5a displays 4a modulation, as shown in Figure 5b. The +FFT image is shown in Figure 5c, displaying alignment between the 1 × 1 lattice peaks +(marked in white circles) and the 4 × 4 spots (marked in cyan circles). In Figure 5d, the +tunneling spectrum also shows a suppression of DOS near the Fermi level and the +magnified dI/dV curve exhibits a particle-hole asymmetric V-shape gap (inset of Figure +5d and more spectroscopic results are seen in Figure S6). Such a spectroscopic feature +(i.e. lack of sharp edge) is also commonly seen in other CDW systems like the CsV3Sb5 +and VSe2, presumably due to the partially gapped Fermi surface.33,34 Moreover, the +dI/dV maps of the same area measured at opposite biases exhibit intensity reversal as +well. As indicated by the dashed cyan lines superimposed on Figure 5e and 5f, regions +with the maximum charge intensity at -150 mV (Figure 5f) turn to the lowest charge +intensity at +150 mV (Figure 5e). These results thus support the CDW nature of the 4 +× 4 superstructure. +Our experiments not only confirm the 4 × 1 and 4 × 4 predictions, but also reveal the +much larger-scale superstructures with irrational periodicities (namely, the √28 × √28 +and √19 × √19 ) that are more stable, including the most energetically stable phase +(√19 × √19). 
Noticed that the √19 × √19 structure was recently observed in the ML +1T-TaTe2, showing nearly identical STM morphology features with ours, and has been +unambiguously confirmed as an enlarged David-star CDW.13,35 The √28 × √28 +structure was also found to be a CDW order according to our private communications +with Zeng. C. G et, al. Given the much larger length scales in these two phases, more +defective features will occur naturally within the periodic unit cells. Thus, more efforts + +are needed to fully optimize their atomic structures and electronic properties. It is a +demanding task to be carried out in future, but beyond the present scope. Despite the +discrepancy in experiments and calculations,23 it is for certain that the ML-NbTe2 is +endowed with rich and diversified electron/phonon correlation modes, which are +greatly different from the bulk case.13,14 In bulk NbTe2, the interlayer Te-Te interaction +is known to be rather strong which can even result in additional intralayer chalcogen- +to-metal transfer (1/3e per unit) forming the trimerized Nb ions chains.18,36 When the +thickness is down to a ML limit, the absent interlayer interaction and enhanced electron +correlation may suppress the movement of electrons, leading to the emergence of new +charge orders. +Conclusion +In conclusion, we reported the MBE growth and STM/S characterization of NbTe2 +thin films down to the ML limit. Multiple CDW orders are found, which are entirely +different from the known bulk case and deviate from the recent theoretical predictions +as well.13,14,23 In particular, we first confirmed the 4 × 4 and 4 × 1 CDWs predictions +by thorough spectroscopic measurements. Moreover, we revealed unexpected CDW +orders with larger length scales, including the √28 × √28 and the √19 × √19 that is, +surprisingly, the stable phase. 
Precise control of the complex of multiple CDW phases +can be achieved owing to the establishment of a comprehensive growth phase diagram +of such an uncharged monolayer TMDC material. Our findings provide a promising +platform for exploring novel properties associated with emerging CDW orders, such as +nontrivial topology and magnetism. + +Methods +Growth of the ML NbTe2 films. Sample growth was carried out using a home-built +MBE system with a base pressure of ~1.2 × 10-10 Torr. The 6H-SiC(0001) wafer was +first degassed at 650 °C for several hours and then flashed to 1300 °C for 45 cycles to +obtain the bilayer graphene-terminated surface.37 High-purity Nb (99.5%) and Te +(99.999%) were evaporated from an electron-beam evaporator and a standard Knudsen +cell, respectively. The flux ratio between Nb and Te was ~ 1:20. The substrate was +heated by means of direct current and calibrated with an infrared spectrometer. The +growth process was monitored by in situ RHEED operated at 20 kV. For ex situ STM/S +measurements, a vacuum vessel with a base pressure of ~ 5 × 10-10 Torr was used to +protect the sample from oxidation during the transition. +Scanning tunneling microscopy and spectroscopy. The STM/S measurements were +carried out using a commercial Unisoku 1300 LT-STM system at 4.8 K (base pressure: +< 1 × 10-10 Torr). Electrochemically etched tungsten tips were cleaned in situ with +electron-beam bombardment and calibrated on a clean Cu(111) surface before all +measurements. The dI/dV spectra were obtained at a constant tip-sample distance by +using a standard lock-in technique with a modulation voltage of 973 Hz. +X-ray photoelectron spectroscopy. XPS spectra were acquired using a Thermo Fisher +ESCALAB 250Xi instrument employing a monochromatic Mg K Alpha (1.254 keV) +X-ray source in an ultrahigh vacuum atmosphere. The binding energies were calibrated +using the C(1s) carbon peak (284.8 eV). 
+Acknowledgements + +This work was supported by the National Key R&D Program of China +(2018FYA0305800 and 2018YFA0703700), the National Natural Science Foundation +of China (11974012 and 12134011), and the Strategic Priority Research Program of +Chinese Academy of Sciences (XDB30000000). We also thank Dr. Yong Liu from +Wuhan University for the XPS measurements. +Electronic Supplementary Material: Supplementary material (including XPS +results, STM images of the √28 × √28 and √19 × √19 CDW orders, large-scale +tunneling spectra of the 4 × 1 and 4 × 4 phases, dI/dV maps of the 4 × 1 phase) is +available in the online version of this article at http://XXX. + +References +(1) Grüner, G. The Dynamics of Charge-Density Waves. Rev. Mod. Phys. 1988, 60, +1129–1181. +(2) Wilson, J. A.; Di Salvo, F. J.; Mahajan, S. Charge-Density Waves and Superlattices +in the Metallic Layered Transition Metal Dichalcogenides. Advances in Physics +2001, 50, 1171–1248. +(3) Rossnagel, K. On the Origin of Charge-Density Waves in Select Layered +Transition-Metal Dichalcogenides. J. Phys.: Condens. Matter 2011, 23, 213001. +(4) Xu, Z.; Yang, H.; Song, X.; Chen, Y.; Yang, H.; Liu, M.; Huang, Z.; Zhang, Q.; +Sun, J.; Liu, L.; Wang, Y. Topical Review: Recent Progress of Charge Density +Waves in 2D Transition Metal Dichalcogenide-Based Heterojunctions and Their +Applications. Nanotechnology 2021, 32, 492001. +(5) Chen, P.; Pai, W. W.; Chan, Y.-H.; Madhavan, V.; Chou, M. Y.; Mo, S.-K.; +Fedorov, A.-V.; Chiang, T.-C. Unique Gap Structure and Symmetry of the Charge +Density Wave in Single-Layer VSe2. Phys. Rev. Lett. 2018, 121, 196402. +(6) Duvjir, G.; Choi, B. K.; Jang, I.; Ulstrup, S.; Kang, S.; Thi Ly, T.; Kim, S.; Choi, +Y. H.; Jozwiak, C.; Bostwick, A.; Rotenberg, E.; Park, J.-G.; Sankar, R.; Kim, K.- +S.; Kim, J.; Chang, Y. J. Emergence of a Metal–Insulator Transition and High- +Temperature Charge-Density Waves in VSe2 at the Monolayer Limit. Nano Lett. +2018, 18, 5432–5438. 
+(7) van Efferen, C.; Berges, J.; Hall, J.; van Loon, E.; Kraus, S.; Schobert, A.; Wekking, +T.; Huttmann, F.; Plaar, E.; Rothenbach, N.; Ollefs, K.; Arruda, L. M.; Brookes, +N.; Schönhoff, G.; Kummer, K.; Wende, H.; Wehling, T.; Michely, T. A Full Gap +above the Fermi Level: The Charge Density Wave of Monolayer VS2. Nat. +Commun. 2021, 12, 6837. +(8) Liu, M.; Wu, C.; Liu, Z.; Wang, Z.; Yao, D.-X.; Zhong, D. Multimorphism and +Gap Opening of Charge-Density-Wave Phases in Monolayer VTe2. Nano Res. +2020, 13, 1733–1738. +(9) Ugeda, M. M.; Bradley, A. J.; Zhang, Y.; Onishi, S.; Chen, Y.; Ruan, W.; Ojeda- + +Aristizabal, C.; Ryu, H.; Edmonds, M. T.; Tsai, H.-Z.; Riss, A.; Mo, S.-K.; Lee, +D.; Zettl, A.; Hussain, Z.; Shen, Z.-X.; Crommie, M. F. Characterization of +Collective Ground States in Single-Layer NbSe2. Nat. Phys. 2016, 12, 92–97. +(10) Lin, H.; Huang, W.; Zhao, K.; Lian, C.; Duan, W.; Chen, X.; Ji, S.-H. Growth of +Atomically Thick Transition Metal Sulfide Filmson Graphene/6H-SiC(0001) by +Molecular Beam Epitaxy. Nano Res. 2018, 11, 4722–4727. +(11) Ryu, H.; Chen, Y.; Kim, H.; Tsai, H.-Z.; Tang, S.; Jiang, J.; Liou, F.; Kahn, S.; Jia, +C.; Omrani, A. A.; Shim, J. H.; Hussain, Z.; Shen, Z.-X.; Kim, K.; Min, B. I.; +Hwang, C.; Crommie, M. F.; Mo, S.-K. Persistent Charge-Density-Wave Order in +Single-Layer TaSe2. Nano Lett. 2018, 18, 689–694. +(12) Feng, J.; Tan, A.; Wagner, S.; Liu, J.; Mao, Z.; Ke, X.; Zhang, P. Charge +Modulation and Structural Transformation in TaTe2 Studied by Scanning +Tunneling Microscopy/Spectroscopy. Appl. Phys. Lett. 2016, 109, 021901. +(13) Battaglia, C.; Cercellier, H.; Clerc, F.; Despont, L.; Garnier, M. G.; Koitzsch, C.; +Aebi, P.; Berger, H.; Forró, L.; Ambrosch-Draxl, C. Fermi-Surface-Induced +Lattice Distortion in NbTe2. Phys. Rev. B 2005, 72, 195114. +(14) Feng, H.; Xu, Z.; Zhuang, J.; Wang, L.; Liu, Y.; Xu, X.; Song, L.; Hao, W.; Du, +Y. Role of Charge Density Wave in Monatomic Assembly in Transition Metal +Dichalcogenides. 
Adv. Funct. Mater. 2019, 29, 1900367. +(15) Nagata, S.; Abe, T.; Ebisu, S.; Ishihara, Y.; Tsutsumi, K. Superconductivity in the +Metallic Layered Compound NbTe2. Journal of Physics and Chemistry of Solids +1993, 54, 895–899. +(16) Zhang, X.; Luo, T.; Hu, X.; Guo, J.; Lin, G.; Li, Y.; Liu, Y.; Li, X.; Ge, J.; Xing, +Y.; Zhu, Z.; Gao, P.; Sun, L.; Wang, J. Superconductivity and Fermi Surface +Anisotropy in Transition Metal Dichalcogenide NbTe2. Chinese Phys. Lett. 2019, +36, 057402. +(17) Chen, H.; Li, Z.; Fan, X.; Guo, L.; Chen, X. Quantum Linear Magnetoresistance +in NbTe2. Solid State Communications 2018, 275, 16–20. +(18) Canadell, E.; Jobic, S.; Brec, R.; Rouxel, J.; Whangbo, M. H. Journal of Solid State +Chemistry 1992, 99, 189-199. + +(19) Hwang, J.; Kim, K.; Zhang, C.; Zhu, T.; Herbig, C.; Kim, S.; Kim, B.; Zhong, Y.; +Salah, M.; El-Desoky, M. M.; Hwang, C.; Shen, Z.-X.; Crommie, M. F.; Mo, S.- +K. Large-Gap Insulating Dimer Ground State in Monolayer IrTe2. Nat. Commun. +2022, 13, 906. +(20) Cui, J.; Li, P.; Zhou, J.; He, W.-Y.; Huang, X.; Yi, J.; Fan, J.; Ji, Z.; Jing, X.; Qu, +F.; Cheng, Z. G.; Yang, C.; Lu, L.; Suenaga, K.; Liu, J.; Law, K. T.; Lin, J.; Liu, +Z.; Liu, G. Transport Evidence of Asymmetric Spin–Orbit Coupling in Few-Layer +Superconducting 1Td-MoTe2. Nat. Commun. 2019, 10, 2044. +(21) Wen, Y.; Liu, Z.; Zhang, Y.; Xia, C.; Zhai, B.; Zhang, X.; Zhai, G.; Shen, C.; He, +P.; Cheng, R.; Yin, L.; Yao, Y.; Getaye Sendeku, M.; Wang, Z.; Ye, X.; Liu, C.; +Jiang, C.; Shan, C.; Long, Y.; He, J. Tunable Room-Temperature Ferromagnetism +in Two-Dimensional Cr2Te3. Nano Lett. 2020, 20, 3130–3139. +(22) Zhuo, W. Z.; Lei, B.; Zhu, C. S.; Sun, Z. L.; Cui, J. H.; Wang, W. X.; Wang, Z. +Y.; Wu, T.; Ying, J. J.; Xiang, Z. J.; Chen, X. H. Thickness-Dependent Electronic +Structure in Layered ZrTe5 down to the Two-Dimensional Limit. Phys. Rev. B +2022, 106, 085428. +(23) Zhang, K.; Zou, N.; Ren, Y.; Wu, J.; Si, C.; Duan, W. 
Realization of Coexisting Charge Density Wave and Quantum Spin/Anomalous Hall State in Monolayer NbTe2. Adv. Funct. Mater. 2022, 32, 2111675.
(24) Zhou, J.; Lin, J.; Huang, X.; Zhou, Y.; Chen, Y.; Xia, J.; Wang, H.; Xie, Y.; Yu, H.; Lei, J.; Wu, D.; Liu, F.; Fu, Q.; Zeng, Q.; Hsu, C.-H.; Yang, C.; Lu, L.; Yu, T.; Shen, Z.; Lin, H.; Yakobson, B. I.; Liu, Q.; Suenaga, K.; Liu, G.; Liu, Z. A Library of Atomically Thin Metal Chalcogenides. Nature 2018, 556, 355–359.
(25) Wu, R.; Tao, Q.; Dang, W.; Liu, Y.; Li, B.; Li, J.; Zhao, B.; Zhang, Z.; Ma, H.; Sun, G.; Duan, X.; Duan, X. Van Der Waals Epitaxial Growth of Atomically Thin 2D Metals on Dangling-Bond-Free WSe2 and WS2. Adv. Funct. Mater. 2019, 29, 1806611.
(26) Li, J.; Zhao, B.; Chen, P.; Wu, R.; Li, B.; Xia, Q.; Guo, G.; Luo, J.; Zang, K.; Zhang, Z.; Ma, H.; Sun, G.; Duan, X.; Duan, X. Synthesis of Ultrathin Metallic MTe2 (M = V, Nb, Ta) Single-Crystalline Nanoplates. Adv. Mater. 2018, 30, 1801043.
(27) Sheraz, A.; Mehmood, N.; Çiçek, M. M.; Ergün, İ.; Rasouli, H. R.; Durgun, E.; Kasırga, T. S. High Elasticity and Strength of Ultra-Thin Metallic Transition Metal Dichalcogenides. Nanoscale Adv. 2021, 3, 3894–3899.
(28) Zhu, X.; Cao, Y.; Zhang, J.; Plummer, E. W.; Guo, J. Classification of Charge Density Waves Based on Their Nature. Proc. Natl. Acad. Sci. U.S.A. 2015, 112, 2367–2371.
(29) Chen, P.; Pai, W. W.; Chan, Y.-H.; Takayama, A.; Xu, C.-Z.; Karn, A.; Hasegawa, S.; Chou, M. Y.; Mo, S.-K.; Fedorov, A.-V.; Chiang, T.-C. Emergence of Charge Density Waves and a Pseudogap in Single-Layer TiTe2. Nat. Commun. 2017, 8, 516.
(30) Mallet, P.; Zimmermann, K. M.; Chevalier, Ph.; Marcus, J.; Veuillen, J. Y.; Rodriguez, J. M. G. Contrast Reversal of the Charge Density Wave STM Image in Purple Potassium Molybdenum Bronze K0.9Mo6O17. Phys. Rev. B 1999, 60, 2122–2126.
(31) Stoltz, D.; Bielmann, M.; Bovet, M.; Schlapbach, L.; Berger, H. Tunneling Evidence for Spatial Location of the Charge-Density-Wave Induced Band Splitting in 1T-TaSe2. Phys. Rev. B 2007, 76, 073410.
(32) Spera, M.; Scarfato, A.; Pásztor, Á.; Giannini, E.; Bowler, D. R.; Renner, Ch. Insight into the Charge Density Wave Gap from Contrast Inversion in Topographic STM Images. Phys. Rev. Lett. 2020, 125, 267603.
(33) Liang, Z.; Hou, X.; Zhang, F.; Ma, W.; Wu, P.; Zhang, Z.; Yu, F.; Ying, J.; Jiang, K.; Shan, L.; Wang, Z.; Chen, X. H. Three-Dimensional Charge Density Wave and Surface-Dependent Vortex-Core States in a Kagome Superconductor CsV3Sb5. Phys. Rev. X 2021, 11, 031026.
(34) Jolie, W.; Knispel, T.; Ehlen, N.; Nikonov, K.; Busse, C.; Grüneis, A.; Michely, T. Charge Density Wave Phase of VSe2 Revisited. Phys. Rev. B 2019, 99, 115417.
(35) Hwang, J.; Jin, Y.; Zhang, C.; Zhu, T.; Kim, K.; Zhong, Y.; Lee, J.; Shen, Z.; Chen, Y.; Ruan, W.; Ryu, H.; Hwang, C.; Lee, J.; Crommie, M. F.; Mo, S.; Shen, Z. A Novel √19 × √19 Superstructure in Epitaxially Grown 1T-TaTe2. Adv. Mater. 2022, 2204579.
(36) Whangbo, M. H.; Canadell, E. Analogies between the Concepts of Molecular Chemistry and Solid-State Physics Concerning Structural Instabilities. Electronic Origin of the Structural Modulations in Layered Transition Metal Dichalcogenides. J. Am. Chem. Soc. 1992, 114, 9587–9600.
(37) Wang, Q.; Zhang, W.; Wang, L.; He, K.; Ma, X.; Xue, Q. Large-Scale Uniform Bilayer Graphene Prepared by Vacuum Graphitization of 6H-SiC(0001) Substrates. J. Phys.: Condens. Matter 2013, 25, 095002.

Figure 1
Figure 1. Characterization of epitaxially grown ML NbTe2. (a) Crystal structure of monolayer 1T-NbTe2. (b) Large-scale STM image of monolayer NbTe2 (Vbias = 3 V, It = 10 pA). The inset shows the height profile along the blue arrow in (b). (c) Room-temperature atomically resolved STM image (Vbias = 10 mV, It = 1 nA). It shows a hexagonal lattice with a periodicity of 3.64 ± 0.04 Å.
(d) RHEED image of submonolayer NbTe2 on a BLG substrate.

Figure 2
Figure 2. Control of four superstructures in ML NbTe2 by growth temperature. (a-d) STM topographies of sub-ML NbTe2 at growth temperatures of (a) 250, (b) 300, (c) 350, and (d) 400 ℃. Each superstructure is marked in the figure. (e) Phase diagram as a function of growth temperature. Scanning parameters for all images: Vbias = -1 V, It = 10 pA. All images were obtained at 4.7 K.

Figure 3
Figure 3. Phase transition induced by post-annealing in ML NbTe2. (a-d) STM images of sub-ML NbTe2 with in situ vacuum annealing at (a) 250, (b) 300, (c) 350, and (d) 400 ℃ for 1 hour. Each superstructure is marked in the figure. Scanning parameters for all images: Vbias = -1 V, It = 10 pA.

Figure 4
Figure 4. STM/S characterization of the 4 × 1 CDW phase. (a) Atomically resolved STM image of the 4 × 1 CDW phase (Vbias = 100 mV, It = 100 pA). (b) Line profile across the NbTe2 atomic lattice, shown by the black arrow in (a). (c) Fast-Fourier transform image of (a), where six 1 × 1 spots (white circles) and a pair of 4 × 1 CDW spots (green circles) can be clearly seen. (d) Typical tunneling spectrum of the 4 × 1 CDW phase (Vbias = 50 mV, It = 200 pA). (e) Conductance maps obtained at E = -30 mV and (f) E = +30 mV over the same region show the stripe modulations with a clear intensity reversal. The green and blue dashed lines are visual guides. All the data were obtained at 4.7 K.
Figure 5
Figure 5. STM/S characterization of the 4 × 4 CDW phase. (a) Atomically resolved STM image of the 4 × 4 CDW phase (Vbias = -100 mV, It = 5 nA). (b) Line profile across the black arrow in (a) shows the 4a modulation. (c) Fast-Fourier transform image of (a). The white and cyan circles indicate the peaks associated with the Bragg points and the 4a × 4a modulation, respectively. (d) Typical large-scale tunneling spectrum of the 4 × 4 CDW phase (Vbias = -500 mV, It = 200 pA). Inset: magnified dI/dV spectroscopy exhibiting a suppression of the DOS near the Fermi level (Vbias = 50 mV, It = 200 pA). (e) Conductance maps of the same area obtained at E = +150 mV and (f) E = -150 mV. A clear charge modulation with contrast inversion is seen in these maps. The cyan dashed lines are visual guides.

Realization of multiple charge density waves in NbTe2 at the monolayer limit

Yusong Bai1, Zemin Pan1, Jinghao Deng1, Xiaoyu Lin1,
Tao Jian1, Chao Zhu1, Da Huo1, Zhengbo Cheng1, Ping Cui3, Zhenyu Zhang3, Qiang Zou2*, Chendong Zhang1,4*

1School of Physics and Technology, Wuhan University, Wuhan 430072, China
2Department of Physics and Astronomy, West Virginia University, WV 26506, USA
3International Center for Quantum Design of Functional Materials (ICQD), Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China
4Wuhan Institute of Quantum Technology, Wuhan 430206, China

Correspondence and requests for materials should be addressed to: cdzhang@whu.edu.cn (C.D.Z.) and
qzou.iphy@gmail.com (Q.Z.)

Abstract: Layered transition-metal dichalcogenides (TMDCs) down to the monolayer (ML) limit provide a fertile platform for exploring charge-density waves (CDWs). Though bulk NbTe2 is known to harbor a single-axis 3 × 1 CDW coexisting with non-trivial quantum properties, the scenario in the ML limit has remained experimentally unknown. In this study, we unveil the richness of the CDW phases in ML NbTe2, where not only the theoretically predicted 4 × 4 and 4 × 1 phases but also two unexpected √28 × √28 and √19 × √19 phases can be realized. For such a complex CDW system, we establish an exhaustive growth phase diagram via systematic efforts in material synthesis and scanning tunneling microscopy characterization. Moreover, we report that the energetically stable phase is the larger-scale order (√19 × √19), which is surprisingly in contradiction to the prior prediction (4 × 4). These findings are confirmed using two different kinetic pathways, i.e., direct growth at proper growth temperatures (T) and low-T growth followed by high-T annealing. Our results provide a comprehensive diagram of the "zoo" of CDW orders in ML 1T-NbTe2 for the first time and offer a new material platform for studying novel quantum phases in the 2D limit.

KEYWORDS: charge-density waves, monolayer 1T-NbTe2, molecular beam epitaxy, scanning tunneling microscopy and spectroscopy

Charge density waves (CDWs), a collective electronic phenomenon with atomic-scale periodic modulation in terms of both lattice and charge degrees of freedom,1 have been discovered in many layered transition-metal dichalcogenides (TMDCs).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='2,3 On reducing dimensionality, the electron–electron and electron–phonon interactions can be markedly enhanced due to the quenched screening,2-4 particularly in the monolayer (ML) limit, where interactions in the vertical direction were further removed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' As a consequence, exotic quantum phenomena associated with the new CDW orders emerge in such a ML system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' For instance, ML VSe2 shows a new symmetry-broken √7 × √3 charge order with an enhanced CDW transition temperature, in contrast to the 4 × 4 CDW in bulk VSe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='5,6 The group V dichalcogenides MX2 (M = V, Nb, Ta;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' X = S, Se, Te) are one of the best- known material families to harbor assorted 2D CDW orders.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='7-12 Unlike other relatives in this family, niobium ditelluride (NbTe2) has been less explored in terms of both experiment and theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Previous experiments with bulk NbTe2 have shown the coexistence of a 3 × 1 CDW order and superconductivity,13-16 as well as anomalous magnetoresistance behaviors.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='16,17 Owing to the shorter Te-Te spacing between layers, the interlayer interaction in Te-based TMDCs is believed to be stronger than that in Se/S compounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='18 Therefore, the telluride layered materials could show dramatic thickness- dependent structural, electronic, magnetic, and topological phase transitions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='19–22 Recently, pioneering theoretical effort has predicted that the ML NbTe2 may exhibit 4 × 4, 4 × 1, 3 × 3, and 3 × 1 CDW orders.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' The 4 × 4 phase is the most stable among them, though the differences in their formation energies are fairly small.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='23 However, despite successes in materials fabrication,24,25 the charge density modulations in NbTe2 at the ML limit remain experimentally unexplored.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Herein, we report the successful epitaxial growth of ML NbTe2 on a bilayer graphene/SiC substrate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Using scanning tunneling microscopy/spectroscopy (STM/S), the triaxial 4 × 4 and uniaxial 4 × 1 CDW, which are absent in the bulk NbTe2, are found in the ML limit agreeing with the calculations.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='23 For both phases, a correlation gap with remarkable magnitude (or an asymmetric line shape in the vicinity of Fermi level) and the intensity reversal of local density-of-state (DOS) maps at opposite biases are observed, which strongly support their CDW origin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Moreover, we discover unexpected triaxial CDW orders with much larger periodicities (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=', √28 × √28 and √19 × √19).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Via systematic efforts in the synthesis and the STM characterization, we establish a comprehensive growth diagram for such a complex CDW system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Confirmed by using two different kinetic pathways, we conclude that the theoretically unforeseen incommensurate CDW orders are generally more thermally favored.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' The pure phase of √19 × √19 CDW is attained at substantially high growth (or annealing) temperature (> 400 ℃), manifesting itself as the most energetically stable CDW in ML NbTe2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' The successful controlled growth of multiple types of CDW orders in ML 1T-NbTe2 provides a prospective material system for exploring novel quantum phases in the 2D limit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Results and discussion ML NbTe2 is in a high-symmetry octahedral (1T) structure at room temperature,24,25 where the hexagonally arranged Nb atoms are sandwiched by two layers of Te atoms in an octahedral coordination, as sketched in Figure 1a (top view in the upper panel and side view in the lower panel).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Using molecular beam epitaxy (MBE), we successfully grew ML NbTe2 on a bilayer graphene (BLG)-terminated 6H-SiC(0001) substrate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Figure 1b shows a typical large-scale STM image of sub-ML NbTe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' The apparent height of the ML NbTe2 film was 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='9 Å, as shown in the inset of Figure 1b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' The room- temperature atomically resolved STM image (Figure 1c) clearly shows a well-ordered hexagonal lattice with a lattice constant of a = 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='64 ± 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='04 Å.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' In situ reflection high- energy electron diffraction (RHEED) patterns were measured to monitor the growth process of NbTe2 on BLG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' After 1 hour of growth (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='9 ML coverage), two sets of strikes appeared (Figure 1d), which suggests that NbTe2 has its own lattice structure rather than taking on the lattice structure of BLG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Using BLG as a reference, the lattice constant of ML NbTe2 was estimated to be a = 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='68 Å, which is consistent with both our STM results (Figure 1c) and the reported value of 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='66 Å for NbTe2 nanoplates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='26 X-ray photoemission spectroscopy (XPS) measurements were performed to characterize the formation of NbTe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' As shown in Figure S1, the Nb 3d5/2 (201.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='9 eV) and Nb 3d3/2 (204.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='7 eV) peaks from Nb4+, as well as the Te 3d5/2 (573.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='2 eV) and Te 3d3/2 (581.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='4 eV) peaks from Te2−, were detected, which is consistent with previous work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='27 The combined STM, RHEED, and XPS characterization results confirm the successful epitaxial growth of a high-quality ML NbTe2 film on a BLG-terminated 6H-SiC(0001) substrate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' We fabricated ML NbTe2 with a series of growth temperatures (Tgrowth) and characterized those samples at 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='7 K in an MBE-STM integrated system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Figure 2a shows a 30 × 30 nm2 STM image of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='8 ML NbTe2 with Tgrowth = 250 ℃ that displays the coexistence of three superstructures: (1) a well-ordered hexagonal superstructure with a periodicity of 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='45 nm (marked by the solid cyan rhombus and labeled as a 4 × 4 superstructure);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (2) a 1D striped superstructure with a periodicity of 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='28 nm along the direction perpendicular to the stripes (labeled as a 4 × 1 superstructure);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' and (3) a disordered hexagonal superstructure that exhibits a local triaxial period but without long-range order (marked by the dashed blue rhombus and labeled as a disordered √28 × √28 superstructure).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Increasing the growth temperature to 300 ℃ resulted in more 4 × 1 phase and ordered √28 × √28 phase with a periodicity of 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='91 nm, but the 4 × 4 phase disappeared (Figure 2b and Figure S2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Interestingly, the 4 × 1 phase was not found after the Tgrowth was increased to 350 ℃, while the √28 × √28 phase continued to increase and coexisted with a new hexagonal superstructure with a periodicity of 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='58 nm (marked by the solid red rhombus and labeled as a √19 × √19 superstructure in Figure 2c and Figure S2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Further increasing the substrate temperature (400-450 ℃) yielded a pure √19 × √19 phase (Figure 2d).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' We summarized the evolution of the four superstructures in ML NbTe2 as a function of growth temperature in Figure 2e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' The surface morphology evolution is shown to be driven by Tgrowth, indicating that the 4 × 4, 4 × 1 and √28 × √28 phases are metastable, while the √19 × √19 phase is energy stable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' To examine the phase stability of those superstructures, we carried out a post- annealing process on the low-temperature grown ML NbTe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Figure 3a shows an STM image of ML NbTe2 grown and annealed at 250 ℃, which is in good agreement with the results in Figure 2a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Different from the direct growth case, the 4 × 1 superstructure was not observed when we fully annealed the sample to 300 ℃ (Figure 3b, more results are seen in Figure S3).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' This suggests the Te-rich conditions may be essential to prepare the 4 × 1 phase (by lowering the formation enthalpy).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Further increasing the annealing temperature to 350 ℃ led to a higher coverage of the √28 × √28 superstructure and an appearance of the √19 × √19 superstructure (Figure 3c), same as Figure 2c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' A pure √19 × √19 phase was generated after the film was annealed over 400 ℃ (Figure 3d), similar to the results of the direct growth cases with growth temperatures from 400 to 450 ℃ (Figure 2d).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' The evolution of those superstructures upon post-annealing confirms the instability of the 4 × 4, 4 × 1 and √28 × √28 phases, which was further verified by another post-annealing process on the sample grown at 300 ℃ (see Figure S3 for details).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' In ref.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 23, first-principles calculations suggested the appearance of 4 × 4, 4 × 1, 3 × 3, and 3 × 1 CDW orders.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Our experiments realized the predicted 4 × 4 and 4 × 1 superstructures, but not the 3 × 3 and 3 × 1 orders.' 
To validate their CDW origins, we performed further STM/S investigations on these two phases. Figure 4a shows an atomically resolved STM image of the 4 × 1 superstructure. Remarkably, a 4a periodicity (~1.45 nm) along the direction of the atomic lattice can be distinguished (Figure 4b). This 4a modulation is confirmed by the two additional peaks circled in green in the fast Fourier transform (FFT) image (Figure 4c). Moreover, we noticed that the distortion of the topmost Te atoms is significantly large (see Figure 4a). To give a more quantitative description of the lattice distortion, we labeled the four atoms in the 4a period as Te1, Te2, Te3, and Te4 (marked in Figure 4b) and summarized the distances between adjacent Te atoms in Table S1.
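As an illustration of the kind of analysis behind the FFT confirmation above, the sketch below recovers a 4a wavevector from a simulated STM line profile. The lattice constant, sampling density, and modulation depth are assumed values for illustration, not data from the paper.

```python
import numpy as np

# Assumed Te-Te spacing (nm); a 4a superstructure then has period ~1.44 nm.
a = 0.36
dx = a / 8                    # 8 samples per atomic period
x = np.arange(512) * dx       # line profile spanning 64 atomic sites

# Atomic corrugation (period a) plus a weaker 4a CDW modulation.
z = np.cos(2 * np.pi * x / a) + 0.3 * np.cos(2 * np.pi * x / (4 * a))

amps = np.abs(np.fft.rfft(z))
freqs = np.fft.rfftfreq(x.size, d=dx)      # spatial frequency (1/nm)

# Superstructure peaks sit below the atomic (Bragg) frequency 1/a,
# so search only that window for the strongest non-DC peak.
sub_bragg = (freqs > 0) & (freqs < 0.5 / a)
q_cdw = freqs[sub_bragg][np.argmax(amps[sub_bragg])]
print(f"CDW peak at {q_cdw:.3f} nm^-1; 1/(4a) = {1 / (4 * a):.3f} nm^-1")
```

In real data the same idea applies to the 2D FFT of the image, with the 4a spots appearing at one quarter of the Bragg-peak distance from the origin.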
We found that the Te1-Te2 and Te4-Te1 distances (Te2-Te3 and Te3-Te4 distances) were longer (shorter) than the Te-Te distance of an ideal octahedral structure, with the variation in the Te-Te bond length reaching 20%, much larger than the 1-7% distortion in conventional ML TMDs within the CDW order [3,28]. We further examined the electronic structure of the 4 × 1 superstructure by STS. The large-scale tunneling spectra show a suppression of the DOS around the Fermi level and display spatial modulation (homogeneity) in the direction perpendicular (parallel) to the stripes (see Figure S4). The magnified averaged dI/dV spectrum is shown in Figure 4d, where a soft gap of ~20 meV at the Fermi level is observed. This gap size is comparable to that of other typical CDW TMDs at the ML limit [9,11,29]. Figures 4e and 4f show dI/dV maps of the same area obtained at -30 mV and +30 mV. Stripe modulations with an apparent intensity reversal are distinguished (see Figure S5 for more dI/dV maps).
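The bond-length analysis above can be sketched numerically. All numbers below (the ideal spacing d0 and the four Te-Te distances) are invented placeholders chosen to reproduce a ~20% distortion, not the measured values of Table S1.

```python
# Hypothetical ideal octahedral Te-Te spacing (nm).
d0 = 0.36
# Hypothetical measured distances within one 4a period.
pairs = {
    "Te1-Te2": 0.432,   # elongated bond
    "Te2-Te3": 0.288,   # contracted bond
    "Te3-Te4": 0.310,
    "Te4-Te1": 0.410,
}

# Percent deviation of each bond from the ideal spacing.
deviation = {k: 100.0 * (d - d0) / d0 for k, d in pairs.items()}
max_distortion = max(abs(v) for v in deviation.values())

for k, v in deviation.items():
    print(f"{k}: {v:+.1f}%")
print(f"max |distortion| = {max_distortion:.1f}%")
```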
Generally, a spatial phase flip in the conductance maps at opposite energies near the Fermi level is a hallmark of the classic Peierls CDW, for which the intensity of the charge accumulation region is enhanced under negative bias, whereas the charge depletion region shows enhanced intensity under positive bias. Hence, the contrast reversal between the filled and empty states in the 4 × 1 phase strongly suggests that it originates from a CDW order [30-32]. Thus, the 4a periodic modulation and atomic lattice distortion, the gap structure around the Fermi level, and the spatial phase flip in the conductance maps collectively support a CDW nature of the 4 × 1 superstructure. Similar CDW behaviors are also observed for the 4 × 4 superstructure. Figure 5a shows an STM image with atomic resolution, which captures a 1 × 1 hexagonal unit (marked in white) and 4 × 4 reconstructed patterns (marked in cyan). The line profile along the black arrow in Figure 5a displays 4a modulation, as shown in Figure 5b. The FFT image in Figure 5c displays alignment between the 1 × 1 lattice peaks (marked by white circles) and the 4 × 4 spots (marked by cyan circles).
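The contrast-reversal criterion described above can be made quantitative by correlating conductance maps taken at opposite biases: a strongly negative Pearson coefficient between the two maps signals the spatial phase flip expected for a Peierls CDW. The maps below are synthetic stand-ins, not measured data.

```python
import numpy as np

ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]

# Stripe pattern mimicking a 4a modulation, plus shared measurement noise.
stripes = np.cos(2 * np.pi * xx / 16)
noise = np.random.default_rng(0).normal(scale=0.2, size=(ny, nx))

map_neg = stripes + noise     # stand-in for the filled-state (-bias) map
map_pos = -stripes + noise    # stand-in for the empty-state (+bias) map

# Pearson correlation between the two maps; contrast reversal drives it
# strongly negative.
r = np.corrcoef(map_neg.ravel(), map_pos.ravel())[0, 1]
print(f"Pearson r between opposite-bias maps: {r:.2f}")
```

For maps without reversal (e.g. topography-dominated contrast), the same statistic would come out near +1 instead.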
In Figure 5d, the tunneling spectrum also shows a suppression of the DOS near the Fermi level, and the magnified dI/dV curve exhibits a particle-hole-asymmetric V-shaped gap (inset of Figure 5d; more spectroscopic results are shown in Figure S6). Such a spectroscopic feature (i.e., the lack of a sharp edge) is also commonly seen in other CDW systems such as CsV3Sb5 and VSe2, presumably due to the partially gapped Fermi surface [33,34]. Moreover, dI/dV maps of the same area measured at opposite biases exhibit intensity reversal as well. As indicated by the dashed cyan lines superimposed on Figures 5e and 5f, regions with the maximum charge intensity at -150 mV (Figure 5f) turn into regions of the lowest charge intensity at +150 mV (Figure 5e). These results thus support the CDW nature of the 4 × 4 superstructure.
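As a rough illustration of how a soft-gap width might be read off a dI/dV curve, the sketch below measures the bias window over which the conductance stays suppressed below its normal-state value, using a synthetic V-shaped spectrum; the half-gap value is an assumption, and real spectra would first need background normalization.

```python
import numpy as np

bias = np.linspace(-0.1, 0.1, 2001)            # sample bias (V)
delta = 0.010                                  # assumed half-gap of 10 meV
# Model V-shaped suppression that recovers to the normal-state value 1.
didv = np.minimum(np.abs(bias) / delta, 1.0)

# Gap region: points where the conductance is still clearly suppressed.
suppressed = didv < 0.99
gap_width = bias[suppressed].max() - bias[suppressed].min()
print(f"full gap width ~ {1e3 * gap_width:.0f} meV")
```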
Our experiments not only confirm the 4 × 1 and 4 × 4 predictions but also reveal much larger-scale superstructures with irrational periodicities (namely, the √28 × √28 and √19 × √19) that are more stable, including the most energetically stable phase (√19 × √19). We note that the √19 × √19 structure was recently observed in ML 1T-TaTe2, showing nearly identical STM morphology features to ours, and has been unambiguously confirmed as an enlarged David-star CDW [13,35]. The √28 × √28 structure was also found to be a CDW order according to our private communications with Zeng, C. G., et al. Given the much larger length scales of these two phases, more defective features naturally occur within the periodic unit cells. Thus, more effort is needed to fully optimize their atomic structures and electronic properties. This is a demanding task for future work, beyond the scope of the present study.
Despite the discrepancy between experiments and calculations [23], it is certain that ML NbTe2 is endowed with rich and diversified electron/phonon correlation modes, which differ greatly from the bulk case [13,14]. In bulk NbTe2, the interlayer Te-Te interaction is known to be rather strong and can even result in additional intralayer chalcogen-to-metal charge transfer (1/3e per unit), forming trimerized Nb-ion chains [18,36]. When the thickness is reduced to the ML limit, the absent interlayer interaction and enhanced electron correlation may suppress the movement of electrons, leading to the emergence of new charge orders.

Conclusion

In conclusion, we reported the MBE growth and STM/S characterization of NbTe2 thin films down to the ML limit. Multiple CDW orders are found, which are entirely different from the known bulk case and also deviate from recent theoretical predictions [13,14,23]. In particular, we confirmed the predicted 4 × 4 and 4 × 1 CDWs for the first time by thorough spectroscopic measurements.
Moreover, we revealed unexpected CDW orders with larger length scales, including the √28 × √28 and the √19 × √19, the latter of which is, surprisingly, the stable phase. Precise control over this complex of multiple CDW phases can be achieved owing to the establishment of a comprehensive growth phase diagram for this previously uncharted monolayer TMDC material. Our findings provide a promising platform for exploring novel properties associated with emerging CDW orders, such as nontrivial topology and magnetism.

Methods

Growth of the ML NbTe2 films. Sample growth was carried out using a home-built MBE system with a base pressure of ~1.2 × 10⁻¹⁰ Torr. The 6H-SiC(0001) wafer was first degassed at 650 °C for several hours and then flashed to 1300 °C for 45 cycles to obtain a bilayer-graphene-terminated surface [37]. High-purity Nb (99.5%) and Te (99.999%) were evaporated from an electron-beam evaporator and a standard Knudsen cell, respectively. The flux ratio between Nb and Te was ~1:20. The substrate was heated by direct current, and its temperature was calibrated with an infrared spectrometer. The growth process was monitored by in situ RHEED operated at 20 kV. For ex situ STM/S measurements, a vacuum vessel with a base pressure of ~5 × 10⁻¹⁰ Torr was used to protect the sample from oxidation during transfer.

Scanning tunneling microscopy and spectroscopy. The STM/S measurements were carried out using a commercial Unisoku 1300 LT-STM system at 4.8 K (base pressure < 1 × 10⁻¹⁰ Torr).
Electrochemically etched tungsten tips were cleaned in situ by electron-beam bombardment and calibrated on a clean Cu(111) surface before all measurements. The dI/dV spectra were obtained at a constant tip-sample distance using a standard lock-in technique with a modulation frequency of 973 Hz.

X-ray photoelectron spectroscopy. XPS spectra were acquired using a Thermo Fisher ESCALAB 250Xi instrument employing a monochromatic Mg Kα (1.254 keV) X-ray source under ultrahigh vacuum. The binding energies were calibrated using the C 1s carbon peak (284.8 eV).

Acknowledgements

This work was supported by the National Key R&D Program of China (2018FYA0305800 and 2018YFA0703700), the National Natural Science Foundation of China (11974012 and 12134011), and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB30000000).
We also thank Dr. Yong Liu from Wuhan University for the XPS measurements.

Electronic Supplementary Material: Supplementary material (including XPS results, STM images of the √28 × √28 and √19 × √19 CDW orders, large-scale tunneling spectra of the 4 × 1 and 4 × 4 phases, and dI/dV maps of the 4 × 1 phase) is available in the online version of this article at http://XXX.

References
(1) Grüner, G. The Dynamics of Charge-Density Waves. Rev. Mod. Phys. 1988, 60, 1129–1181.
(2) Wilson, J. A.; Di Salvo, F. J.; Mahajan, S. Charge-Density Waves and Superlattices in the Metallic Layered Transition Metal Dichalcogenides. Advances in Physics 2001, 50, 1171–1248.
(3) Rossnagel, K. On the Origin of Charge-Density Waves in Select Layered Transition-Metal Dichalcogenides. J. Phys.: Condens. Matter 2011, 23, 213001.
(4) Xu, Z.; Yang, H.; Song, X.; Chen, Y.; Yang, H.; Liu, M.; Huang, Z.; Zhang, Q.; Sun, J.; Liu, L.; Wang, Y. Topical Review: Recent Progress of Charge Density Waves in 2D Transition Metal Dichalcogenide-Based Heterojunctions and Their Applications. Nanotechnology 2021, 32, 492001.
(5) Chen, P.; Pai, W. W.; Chan, Y.-H.; Madhavan, V.; Chou, M. Y.; Mo, S.-K.; Fedorov, A.-V.; Chiang, T.-C. Unique Gap Structure and Symmetry of the Charge Density Wave in Single-Layer VSe2. Phys. Rev. Lett. 2018, 121, 196402.
(6) Duvjir, G.; Choi, B. K.; Jang, I.; Ulstrup, S.; Kang, S.; Thi Ly, T.; Kim, S.; Choi, Y. H.; Jozwiak, C.; Bostwick, A.; Rotenberg, E.; Park, J.-G.; Sankar, R.; Kim, K.-S.; Kim, J.; Chang, Y. J. Emergence of a Metal–Insulator Transition and High-Temperature Charge-Density Waves in VSe2 at the Monolayer Limit. Nano Lett. 2018, 18, 5432–5438.
(7) van Efferen, C.; Berges, J.; Hall, J.; van Loon, E.; Kraus, S.; Schobert, A.; Wekking, T.; Huttmann, F.; Plaar, E.; Rothenbach, N.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Ollefs, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Arruda, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Brookes, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Schönhoff, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Kummer, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Wende, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Wehling, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Michely, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' A Full Gap above the Fermi Level: The Charge Density Wave of Monolayer VS2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 2021, 12, 6837.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (8) Liu, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Wu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Liu, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Yao, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zhong, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Multimorphism and Gap Opening of Charge-Density-Wave Phases in Monolayer VTe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Nano Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 2020, 13, 1733–1738.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (9) Ugeda, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Bradley, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zhang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Onishi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Chen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Ruan, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Ojeda- Aristizabal, C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Ryu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Edmonds, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Tsai, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Riss, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Mo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Lee, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zettl, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Hussain, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Shen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Crommie, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Characterization of Collective Ground States in Single-Layer NbSe2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 2016, 12, 92–97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (10) Lin, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Huang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zhao, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Lian, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Duan, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Chen, X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Ji, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Growth of Atomically Thick Transition Metal Sulfide Filmson Graphene/6H-SiC(0001) by Molecular Beam Epitaxy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Nano Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 2018, 11, 4722–4727.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (11) Ryu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Chen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Kim, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Tsai, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Tang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Jiang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Liou, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Kahn, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Jia, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Omrani, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Shim, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Hussain, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Shen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Kim, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Min, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Hwang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Crommie, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Mo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Persistent Charge-Density-Wave Order in Single-Layer TaSe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Nano Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 2018, 18, 689–694.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (12) Feng, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Tan, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Wagner, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Mao, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Ke, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zhang, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Charge Modulation and Structural Transformation in TaTe2 Studied by Scanning Tunneling Microscopy/Spectroscopy.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 2016, 109, 021901.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (13) Battaglia, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Cercellier, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Clerc, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Despont, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Garnier, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' G.' 
[…]; Koitzsch, C.; Aebi, P.; Berger, H.; Forró, L.; Ambrosch-Draxl, C. Fermi-Surface-Induced Lattice Distortion in NbTe2. Phys. Rev. B 2005, 72, 195114.
(14) Feng, H.; Xu, Z.; Zhuang, J.; Wang, L.; Liu, Y.; Xu, X.; Song, L.; Hao, W.; Du, Y. Role of Charge Density Wave in Monatomic Assembly in Transition Metal Dichalcogenides. Adv. Funct. Mater. 2019, 29, 1900367.
(15) Nagata, S.; Abe, T.; Ebisu, S.; Ishihara, Y.; Tsutsumi, K. Superconductivity in the Metallic Layered Compound NbTe2. Journal of Physics and Chemistry of Solids 1993, 54, 895–899.
(16) Zhang, X.; Luo, T.; Hu, X.; Guo, J.; Lin, G.; Li, Y.; Liu, Y.; Li, X.; Ge, J.; Xing, Y.; Zhu, Z.; Gao, P.; Sun, L.; Wang, J. Superconductivity and Fermi Surface Anisotropy in Transition Metal Dichalcogenide NbTe2. Chinese Phys. Lett. 2019, 36, 057402.
(17) Chen, H.; Li, Z.; Fan, X.; Guo, L.; Chen, X. Quantum Linear Magnetoresistance in NbTe2. Solid State Communications 2018, 275, 16–20.
(18) Canadell, E.; Jobic, S.; Brec, R.; Rouxel, J.; Whangbo, M.-H. Journal of Solid State Chemistry 1992, 99, 189–199.
(19) Hwang, J.; Kim, K.; Zhang, C.; Zhu, T.; Herbig, C.; Kim, S.; Kim, B.; Zhong, Y.; Salah, M.; El-Desoky, M. M.; Hwang, C.; Shen, Z.-X.; Crommie, M. F.; Mo, S.-K. Large-Gap Insulating Dimer Ground State in Monolayer IrTe2. Nat. Commun. 2022, 13, 906.
(20) Cui, J.; Li, P.; Zhou, J.; He, W.-Y.; Huang, X.; Yi, J.; Fan, J.; Ji, Z.; Jing, X.; Qu, F.; Cheng, Z. G.; Yang, C.; Lu, L.; Suenaga, K.; Liu, J.; Law, K. T.; Lin, J.; Liu, Z.; Liu, G. Transport Evidence of Asymmetric Spin–Orbit Coupling in Few-Layer Superconducting 1Td-MoTe2. Nat. Commun. 2019, 10, 2044.
(21) Wen, Y.; Liu, Z.; Zhang, Y.; Xia, C.; Zhai, B.; Zhang, X.; Zhai, G.; Shen, C.; He, P.; Cheng, R.; Yin, L.; Yao, Y.; Getaye Sendeku, M.; Wang, Z.; Ye, X.; Liu, C.; Jiang, C.; Shan, C.; Long, Y.; He, J. Tunable Room-Temperature Ferromagnetism in Two-Dimensional Cr2Te3. Nano Lett. 2020, 20, 3130–3139.
(22) Zhuo, W. Z.; Lei, B.; Zhu, C. S.; Sun, Z. L.; Cui, J. H.; Wang, W. X.; Wang, Z. Y.; Wu, T.; Ying, J. J.; Xiang, Z. J.; Chen, X. H. Thickness-Dependent Electronic Structure in Layered ZrTe5 down to the Two-Dimensional Limit. Phys. Rev. B 2022, 106, 085428.
(23) Zhang, K.; Zou, N.; Ren, Y.; Wu, J.; Si, C.; Duan, W. Realization of Coexisting Charge Density Wave and Quantum Spin/Anomalous Hall State in Monolayer NbTe2. Adv. Funct. Mater. 2022, 32, 2111675.
(24) Zhou, J.; Lin, J.; Huang, X.; Zhou, Y. […]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Chen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Xia, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Wang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Xie, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Yu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Lei, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Wu, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Liu, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Fu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zeng, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Hsu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Yang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Lu, L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Yu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Shen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Lin, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Yakobson, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Liu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Suenaga, K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Liu, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Liu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' A Library of Atomically Thin Metal Chalcogenides.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Nature 2018, 556, 355–359.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (25) Wu, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Tao, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Dang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Liu, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Li, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zhao, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zhang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Ma, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Sun, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Duan, X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Duan, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Van Der Waals Epitaxial Growth of Atomically Thin 2D Metals on Dangling‐Bond‐Free WSe2 and WS2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Funct.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Mater.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 2019, 29, 1806611.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (26) Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zhao, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Chen, P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Wu, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Li, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Xia, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Guo, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Luo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zhang, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Ma, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Sun, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Duan, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Duan, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Synthesis of Ultrathin Metallic MTe2 (M = V, Nb, Ta) Single-Crystalline Nanoplates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Mater.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 2018, 30, 1801043.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (27) Sheraz, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Mehmood, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Çiçek, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Ergün, İ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Rasouli, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Durgun, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Kasırga, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' High Elasticity and Strength of Ultra-Thin Metallic Transition Metal Dichalcogenides.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Nanoscale Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 2021, 3, 3894–3899.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (28) Zhu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Cao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Plummer, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Guo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Classification of Charge Density Waves Based on Their Nature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Natl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Acad.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 2015, 112, 2367–2371.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (29) Chen, P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Pai, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Chan, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Takayama, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Xu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Karn, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Hasegawa, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Chou, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Mo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Fedorov, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Chiang, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content='-C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Emergence of Charge Density Waves and a Pseudogap in Single-Layer TiTe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' 2017, 8, 516.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' (30) Mallet, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Zimmermann, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Chevalier, Ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UdE3T4oBgHgl3EQfagqD/content/2301.04507v1.pdf'} +page_content=' Marcus, J.' 
Figure 1. Characterization of epitaxially grown ML NbTe2. (a) Crystal structure of monolayer 1T-NbTe2. (b) Large-scale STM image of monolayer NbTe2 (Vbias = 3 V, It = 10 pA). The inset shows the height profile along the blue arrow in (b). (c) Room-temperature atomically resolved STM image (Vbias = 10 mV, It = 1 nA), showing a hexagonal lattice with a periodicity of 3.64 ± 0.04 Å. (d) RHEED image of submonolayer NbTe2 on a BLG substrate.
Figure 2. Control of four superstructures in ML NbTe2 by growth temperature. (a-d) STM topographies of sub-ML NbTe2 at growth temperatures of (a) 250, (b) 300, (c) 350, and (d) 400 °C. Each superstructure is marked in the figure. (e) Phase diagram as a function of growth temperature. Scanning parameters for all images: Vbias = -1 V, It = 10 pA. All images were obtained at 4.7 K.
Figure 3. Phase transition induced by post-annealing in ML NbTe2. (a-d) STM images of sub-ML NbTe2 after in situ vacuum annealing at (a) 250, (b) 300, (c) 350, and (d) 400 °C for 1 hour. Each superstructure is marked in the figure. Scanning parameters for all images: Vbias = -1 V, It = 10 pA.
Figure 4. STM/S characterization of the 4 × 1 CDW phase. (a) Atomically resolved STM image of the 4 × 1 CDW phase (Vbias = 100 mV, It = 100 pA). (b) Line profile across the NbTe2 atomic lattice, along the black arrow in (a). (c) Fast-Fourier-transform image of (a), where six 1 × 1 spots (white circles) and a pair of 4 × 1 CDW spots (green circles) can be clearly seen. (d) Typical tunneling spectrum of the 4 × 1 CDW phase (Vbias = 50 mV, It = 200 pA). (e) Conductance maps obtained at E = -30 mV and (f) E = +30 mV over the same region show the stripe modulations with a clear intensity reversal. The green and blue dashed lines are visual guides. All data were obtained at 4.7 K.
Figure 5. STM/S characterization of the 4 × 4 CDW phase. (a) Atomically resolved STM image of the 4 × 4 CDW phase (Vbias = -100 mV, It = 5 nA). (b) Line profile along the black arrow in (a) shows the 4a modulation. (c) Fast-Fourier-transform image of (a). The white and cyan circles indicate the peaks associated with the Bragg points and the 4a × 4a modulation, respectively. (d) Typical large-scale tunneling spectrum of the 4 × 4 CDW phase (Vbias = -500 mV, It = 200 pA). Inset: magnified dI/dV spectrum exhibiting a suppression of the DOS near the Fermi level (Vbias = 50 mV, It = 200 pA). (e) Conductance maps of the same area obtained at E = +150 mV and (f) E = -150 mV. A clear charge modulation with contrast inversion can be seen in these maps. The cyan dashed lines are visual guides.
diff --git a/VNFKT4oBgHgl3EQfmS5u/content/2301.11857v1.pdf b/VNFKT4oBgHgl3EQfmS5u/content/2301.11857v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..ab7211385876fcebf359ad195275c03a1fb853f9 --- /dev/null +++ b/VNFKT4oBgHgl3EQfmS5u/content/2301.11857v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ac471da61132b8798af18523a0af922b275e4d8a0c31710be541333d7d5d5a3 +size 1233524 diff --git a/VNFKT4oBgHgl3EQfmS5u/vector_store/index.faiss b/VNFKT4oBgHgl3EQfmS5u/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..973909570a09621008e94ff46fc50d6ac4b097ed --- /dev/null +++ b/VNFKT4oBgHgl3EQfmS5u/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cc38f90f5f849c7022b11b7d8c183158ff64232f6fc7312a6260b1a70624d650 +size 2031661 diff --git a/VtE2T4oBgHgl3EQfDQZP/content/tmp_files/2301.03622v1.pdf.txt b/VtE2T4oBgHgl3EQfDQZP/content/tmp_files/2301.03622v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..6590d22a13bf185b8a50bccd287a6a02b070881b --- /dev/null +++ b/VtE2T4oBgHgl3EQfDQZP/content/tmp_files/2301.03622v1.pdf.txt @@ -0,0 +1,1243 @@ +Searching for Ultralight Dark Matter Conversion in Solar Corona using LOFAR Data +Haipeng An,1, 2, 3, 4, ∗ Xingyao Chen,5, † Shuailiang Ge,3, 6, ‡ Jia Liu,6, 3, § and Yan Luo6, ¶ +1Department of Physics, Tsinghua University, Beijing 100084, China +2Center for High Energy Physics, Tsinghua University, Beijing 100084, China +3Center for High Energy Physics, Peking University, Beijing 100871, China +4Frontier Science Center for Quantum Information, Beijing 100084, China +5School of
Physics & Astronomy, University of Glasgow, Glasgow, G12 8QQ, UK
6School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China
Ultralight axions and dark photons are well-motivated dark matter (DM) candidates. The axion DM and dark photon DM (DPDM) can resonantly convert into electromagnetic (EM) waves in the solar corona when their mass is equal to the solar plasma frequency. The resultant EM waves are mono-chromatic in the radio-frequency range, with an energy equal to the DM mass, and can be detected via radio telescopes for solar observations. We search for converted mono-chromatic signals in the observational data of the high-sensitivity Low Frequency Array (LOFAR) telescope. We find that the upper limit on the kinetic mixing coupling between DPDM and the photon can reach 10^-13 in the frequency range 30−80 MHz, which is about one order of magnitude better than the existing constraint from the cosmic microwave background (CMB) observation. In addition, we also obtain the upper limit on the axion-photon coupling in the same frequency range, which is better than the constraints from Light-Shining-through-a-Wall experiments but does not exceed the CAST or other astrophysical bounds.
INTRODUCTION
Due to the null results of searching for WIMPs [1–3], ultralight dark matter (DM) candidates, including QCD axions, axion-like particles, and dark photons, have been attracting more attention than ever. The QCD axion was first proposed to solve the strong CP problem [4–7] and was later shown to be a good DM candidate [8]. Axion-like particles arising in, e.g., string-theory models [9], which couple to SM particles in a similar way, can also be good DM candidates. Axions or axion-like particles can be generated by the misalignment mechanism [10–12] or the decay of topological objects [13, 14] in the early Universe.
The dark photon is a vector ultralight DM candidate [15-18], arising in one of the simplest extensions of the SM: a massive vector field coupled to the photon field via the kinetic-mixing marginal operator [19-24]. There are several ways to produce the right amount of DPDM in the early Universe, including the misalignment mechanism with a non-minimal coupling to the Ricci scalar [16, 17, 25-27], inflationary fluctuations [18, 28-37], parametric resonances [38-43], and the decay of cosmic strings [44].

The couplings of axions or dark photons to SM particles provide important tools for searching for these ultralight particles. Various types of experiments look for signals connected with photons, including haloscopes for Galactic halo DM [45, 46], helioscopes for ultralight particles emitted from the Sun [45, 46], "Light Shining through the Wall" (LSW) experiments [47, 48], and various astrophysical bounds such as CMB spectral distortion constraints on dark photons [17, 49], gamma-ray constraints on axion DM [50, 51], and stellar lifetime constraints on dark photons [52, 53]. Axions and dark photons can also be detected with WIMP detectors [54, 55]. Moreover, many experimental results for axion DM can be reinterpreted for dark photons. We refer the reader to Refs. [56, 57] for a summary of experimental constraints (including projected ones) on axions and dark photons.

Another meaningful way to look for axions or dark photons is to search for anomalous signals in various astrophysical environments, such as neutron stars [58-68], white dwarfs [68-71], supernovae [72, 73], quasars and blazars [74-78], red giants and horizontal branch stars [79], and globular clusters [80, 81]. These searches assume that the ultralight particles are either DM or sourced inside the astrophysical objects. As the closest star to us, the Sun also serves as a good laboratory for ultralight particles.
Previous works set constraints on ultralight particles generated inside the Sun via stellar cooling [52, 79, 82] and axion decay [83]. On the other hand, Ref. [84] proposed that DPDM can resonantly convert into mono-chromatic radio-frequency EM waves in the solar corona at the radius where the plasma frequency equals the DPDM mass. In the presence of the solar magnetic field, axion DM can also resonantly convert into radio waves in the solar corona. In this work, we search for such mono-chromatic radio signals in the solar observation data of the LOFAR telescope [85]. To calculate the signal, we simulate the propagation of EM waves in the corona of the quiet Sun. We then compare the signal with LOFAR data to derive upper limits on the DPDM and axion DM models.

arXiv:2301.03622v1 [hep-ph] 9 Jan 2023

MODELS

We consider both the DPDM model and the axion DM model. In the DPDM model, the dark photon interacts with SM particles through the kinetic mixing, and the Lagrangian can be written as

    \mathcal{L}_{A'\gamma} = -\tfrac{1}{4} F'_{\mu\nu} F'^{\mu\nu} + \tfrac{1}{2} m_{A'}^2 A'_\mu A'^\mu - \tfrac{1}{2} \epsilon F_{\mu\nu} F'^{\mu\nu},    (1)

where F_{\mu\nu} and F'_{\mu\nu} are the photon and dark photon field strengths, respectively, and \epsilon is the kinetic mixing parameter. In the axion DM model, the axion a, a pseudo-scalar, interacts with the SM photon via

    \mathcal{L}_{a\gamma} = \tfrac{1}{2} \partial_\mu a \, \partial^\mu a - \tfrac{1}{2} m_a^2 a^2 + \tfrac{1}{4} g_{a\gamma\gamma} \, a F_{\mu\nu} \tilde{F}^{\mu\nu},    (2)

where \tilde{F}^{\mu\nu} \equiv \varepsilon^{\mu\nu\alpha\beta} F_{\alpha\beta}/2 is the dual EM field strength and g_{a\gamma\gamma} is the coupling strength between the axion and the EM field. The last term in (2) can be rewritten as -g_{a\gamma\gamma} \, a \, \mathbf{E}\cdot\mathbf{B}.

In the solar corona, the free electrons generate a plasma frequency \omega_p, which serves as an effective mass for EM waves. In a non-relativistic plasma, it is determined by the free electron density n_e as

    \omega_p = \left(\frac{4\pi\alpha n_e}{m_e}\right)^{1/2} = \left(\frac{n_e}{7.3\times 10^{8}\,\mathrm{cm}^{-3}}\right)^{1/2} \mu\mathrm{eV},    (3)

where \alpha is the fine structure constant and m_e is the electron mass.
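As a quick numerical cross-check of Eq. (3), the plasma frequency can be evaluated directly from the electron density. The sketch below uses textbook constants (it is not code from the paper) and reproduces the 1 μeV normalization, the quoted coronal mass range, and the mass-to-frequency mapping:

```python
import math

HBARC_EV_CM = 1.9732698e-5   # hbar*c in eV*cm (converts cm^-3 to eV^3)
ALPHA = 1 / 137.035999       # fine structure constant
M_E_EV = 510998.95           # electron mass in eV

def plasma_frequency_ev(n_e_cm3):
    """Plasma frequency omega_p = sqrt(4*pi*alpha*n_e/m_e) of Eq. (3),
    with n_e in cm^-3 and the result in eV (natural units)."""
    n_e_natural = n_e_cm3 * HBARC_EV_CM**3
    return math.sqrt(4 * math.pi * ALPHA * n_e_natural / M_E_EV)

def dm_mass_ev(freq_hz):
    """DM mass probed at observing frequency f: m = 2*pi*f (hbar = 1),
    i.e. m[eV] = h*f with h = 4.135667e-15 eV s."""
    return 4.135667e-15 * freq_hz

print(plasma_frequency_ev(7.3e8))                    # ~1.0e-6 eV, as in Eq. (3)
print(plasma_frequency_ev(1e10), plasma_frequency_ev(1e6))   # ~3.7e-6, ~3.7e-8 eV
print(dm_mass_ev(30e6), dm_mass_ev(80e6))            # LOFAR band: ~1.2e-7 to ~3.3e-7 eV
```

The coronal density range 10^10 to 10^6 cm^-3 indeed maps onto the quoted plasma-frequency window of roughly 4e-6 down to 4e-8 eV.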
When a dark photon A' propagates in the plasma, it can resonantly convert into an SM photon if \omega_p \approx m_{A'} [84]. In the presence of a magnetic field, axions a can also resonantly convert into the EM field [86]. In the solar corona, n_e decreases monotonically from 10^10 to 10^6 cm^-3 with height above the solar photosphere, so the corresponding plasma frequency scans from 4×10^-6 down to 4×10^-8 eV. If the DM mass m_DM falls in this range, the resonant conversion of DPDM or axion DM into EM waves occurs at the specific radius r_c satisfying \omega_p(r_c) = m_DM. Since the DM is non-relativistic in the galactic halo, the converted EM wave is mono-chromatic with frequency m_DM/2\pi and a spread of about 10^-6 of its central value. The resulting radio frequencies, 10-1000 MHz, can be probed by radio telescopes with solar physics programs, e.g., LOFAR [85] and SKA [87].

We assume a static, spherically symmetric solar model [88] for the quiet Sun, with the DM wind (either DPDM or axion DM) constantly passing through the solar atmosphere. The resonant conversion probabilities for the DPDM model and the axion DM model in the solar corona are [84, 86]

    P_{A'\to\gamma}(v_{r_c}) = \frac{2}{3}\,\pi \epsilon^2 m_{A'}\, v_{r_c}^{-1} \left|\frac{\partial \ln \omega_p^2(r)}{\partial r}\right|^{-1}_{r=r_c},    (4)

and

    P_{a\to\gamma}(v_{r_c}) = \frac{\pi g_{a\gamma\gamma}^2 |B_T|^2}{m_a}\, v_{r_c}^{-1} \left|\frac{\partial \ln \omega_p^2(r)}{\partial r}\right|^{-1}_{r=r_c},    (5)

respectively, where v_{r_c} is the radial velocity at the resonant layer and B_T is the magnetic field transverse to the direction of the axion propagation. The probabilities in the two cases are related via

    \sqrt{\tfrac{2}{3}}\, \epsilon\, m_{A'}^2 \;\Leftrightarrow\; g_{a\gamma\gamma} |B_T|\, m_a.    (6)

The prefactor 2/3 in (4) arises because the longitudinal mode of the massive photon cannot propagate out of the plasma, whereas in the axion DM case the converted photons are all in transverse modes polarized parallel to B_T [70, 86].
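Eq. (4) can be evaluated at the order-of-magnitude level. In this sketch (not from the paper) the ~5×10^7 m density scale length is a hypothetical stand-in for |∂ln ω_p²/∂r|^{-1} at the resonance, and the other inputs are round numbers from the text:

```python
import math

HBARC_EV_M = 1.9732698e-7    # hbar*c in eV*m

def conversion_probability(eps, m_ev, v_over_c, scale_height_m):
    """Sketch of Eq. (4): P = (2/3)*pi*eps^2*m*L/v with
    L = |d ln(omega_p^2)/dr|^-1 the plasma scale length at resonance,
    everything converted to natural units (eV)."""
    L_natural = scale_height_m / HBARC_EV_M   # metres -> eV^-1
    return (2.0 / 3.0) * math.pi * eps**2 * m_ev * L_natural / v_over_c

# Illustrative numbers: eps at the derived limit, m in the LOFAR band,
# v ~ 600 km/s ~ 2e-3 c at the conversion layer, hypothetical L = 5e7 m.
p = conversion_probability(eps=1e-13, m_ev=2e-7, v_over_c=2e-3,
                           scale_height_m=5e7)
print(p)   # ~5e-16: conversion is rare per crossing but coherent over the layer
```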
The Sun has a dipole-like magnetic field, but one subject to large fluctuations [89, 90]. The global map of the coronal magnetic field obtained with the Coronal Multi-channel Polarimeter technique shows a field strength of about 1-4 Gauss at distances of 1.05-1.35 R_⊙ [91]. However, the map also shows strong inhomogeneity of the magnetic field with location, altitude, and time, which makes an accurate calculation of the axion-photon conversion difficult. Consequently, we compromise by using the conservative value of 1 Gauss for |B_T| in the axion conversion relation (6).

With the conversion probability, the radiation power P per solid angle \Omega at the conversion layer can be derived as

    \frac{dP}{d\Omega} = \int dv_0\, f_{\rm DM}(v_0)\, P_{X\to\gamma}(v_0)\, \rho_{\rm DM}\, v(r_c)\, r_c^2,    (7)

where the DM density is \rho_DM = 0.3 GeV cm^-3 [92, 93] and the initial DM velocity v_0 follows a Maxwellian distribution f_DM(v_0) with most probable velocity 235 km/s [94, 95]. v(r_c) = \sqrt{v_0^2 + 2 G_N M_\odot / r_c} is the DM velocity at the conversion layer, including the gravitational effect of the Sun, with G_N the gravitational constant and M_⊙ the solar mass.

PROPAGATION

The converted EM waves propagating through the corona experience interactions with the plasma, including both absorption and scattering. The absorption of the converted photons proceeds mainly via the inverse bremsstrahlung process. Due to refraction, the converted EM wave would propagate along the radial direction soon after it leaves the resonant region if there were no scattering off the plasma [84]. However, scattering in the inhomogeneous plasma randomizes the direction of the EM waves, leading to a broadened angular distribution of the outgoing radiation [96, 97]. The LOFAR observation in interferometry mode has a fine angular resolution.
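The gravitational boost inside v(r_c) dominates over the halo velocity, as a short numerical check (standard constants, not code from the paper) shows:

```python
import math

G_N = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def v_at_conversion(v0_kms, rc_in_rsun):
    """DM speed at the conversion layer, v(r_c) = sqrt(v0^2 + 2 G M_sun / r_c):
    the halo velocity boosted by infall in the Sun's potential."""
    v0 = v0_kms * 1e3
    v_fall_sq = 2 * G_N * M_SUN / (rc_in_rsun * R_SUN)
    return math.sqrt(v0**2 + v_fall_sq) / 1e3   # km/s

# Most probable halo speed, conversion layer just above the photosphere:
print(v_at_conversion(235.0, 1.05))   # ~650 km/s, infall-dominated
```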
The field of view (FOV) of each LOFAR beam is significantly smaller than the total angular span of the Sun. Thus, we expect the scattering effect to suppress the signal observed by the LOFAR detector.

With the absorption and scattering effects taken into account, the spectral flux density received by LOFAR can be written as

    S_{\rm sig} = \frac{1}{B} \frac{1}{d^2} \frac{dP}{d\Omega}\, P_{\rm sur}(f)\, \beta(f),    (8)

where d = 1 AU is the distance from the Earth to the Sun and B is the bandwidth, the larger of the DM dispersion bandwidth B_sig and the telescope spectral resolution B_res. In our case B_sig < B_res, so B = B_res. In (8), P_sur is the survival probability. For each converted photon, P_sur depends on the path it travels, so a numerical simulation is needed to calculate it. The \beta factor in (8) parameterizes the scattering effect and is defined as

    \beta(f) = \frac{d^2}{R_S^2} \int_{\rm beam} \frac{g(\theta_1, \varphi_1)}{r^2}\, dS,    (9)

where g(\theta_1, \varphi_1) is the angular distribution function of the scattered photons at the last-scattering radius R_S, beyond which the scattering process can be neglected. R_S(f) is determined by numerical simulation to be about 5 to 7 R_⊙, depending slightly on the photon frequency. The integration in (9) runs over the last-scattering surface, and r is the distance from the surface element dS to LOFAR. The detailed derivation and computation of Eq. (8) require some involved but basic geometric analysis and are presented in the appendix.

We use the Monte Carlo ray-tracing method developed in Ref. [97] to simulate the propagation of the converted photons in the corona plasma, including both the absorption and scattering effects. We describe the radio-wave scattering process using the Fokker-Planck and Langevin equations based on the Hamilton equations for photons [97-99]. In our simulation, we follow Ref. [96] and use the Kolmogorov spectrum for the electron density fluctuations of the quiet Sun, with \delta n_e/n_e = 0.1.
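Eq. (8) itself is a simple product once the simulated factors are known. A minimal sketch, with hypothetical input numbers for scale only (P_sur and β of order the simulated values, B = 97 kHz as in the text, dP/dΩ made up):

```python
AU_M = 1.496e11   # 1 AU in metres

def spectral_flux_density(dP_dOmega_W_sr, P_sur, beta, bandwidth_hz):
    """Eq. (8): S_sig = (1/B) (1/d^2) (dP/dOmega) P_sur(f) beta(f),
    returned in W m^-2 Hz^-1."""
    return dP_dOmega_W_sr * P_sur * beta / (bandwidth_hz * AU_M**2)

# Hypothetical inputs: dP/dOmega = 1e12 W/sr is illustrative only.
s = spectral_flux_density(1e12, 0.3, 0.05, 97e3)
print(s / 1e-22)   # in solar flux units (1 sfu = 1e-22 W m^-2 Hz^-1)
```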
Moreover, we take the magnitude of the anisotropic density fluctuation to be \alpha_anis = 0.1 [96], where \alpha_anis is the anisotropy parameter defined as the ratio between the perpendicular and parallel correlation lengths [97]. For each frequency we then calculate P_sur(f) and \beta(f); the simulation results are presented in Fig. 1. We see that the absorption effect is more substantial at higher frequencies. The same is true for the smearing effect, mainly because the FOV of LOFAR decreases with increasing frequency, which makes the smearing effect more apparent at higher frequencies.

Figure 1. The survival probability P_sur (cyan) and the smearing factor \beta (purple) as functions of the photon frequency.

LOFAR DATA ANALYSIS

LOFAR is an advanced radio interferometer with high resolution and sensitivity. We use observation data from the beam-formed mode [85], which combines the 24 LOFAR core stations in the Netherlands to form 127 tied-array beams. This mode increases the sensitivity remarkably; however, the 3.5 km baseline of the LOFAR core also restricts the FOV to only ~5' at 32 MHz [85]. The observation data we use are spectral flux densities calibrated in solar flux units (sfu) within the frequency range 30-80 MHz. Since some beams point outside the solar surface, only beams with fluxes larger than half of the maximum beam flux are selected. We have data from three observation periods of the same 17-minute duration, carried out on April 25, 2015, July 3, 2015, and September 3, 2015. The bandwidth is B_res = 97 kHz.

We take the average of the data in the selected beams. The averaged data are distributed over 516 frequency bins, each consisting of 6000 time bins. We clean the data using the following procedure to eliminate burst-like noise.
First, we divide the 6000 time bins into 150 intervals, each containing 40 time bins, enough to reflect the statistics. We choose the interval with the lowest mean as the reference interval. Then we compare the mean \mu_t and standard deviation \sigma_t of each interval with those of the reference interval. We only keep the intervals satisfying \mu_t[test] < \mu_t[ref] + 2\sigma_t[ref] and \sigma_t[test] < 2\sigma_t[ref]. This data-cleaning process removes only transient noise, not the time-independent ultralight DM signal.

After data cleaning, for each frequency bin i we compute the time-series average \bar{O}_i and its standard deviation \sigma_{\bar{O}_i} as the statistical uncertainty. We parameterize the background locally by fitting each frequency bin and its adjacent k bins with a polynomial of degree n; in practice, we choose k = 10 and n = 3. We then use the least-squares method to evaluate the deviation of the data from the background fit and take the fitting deviation as the systematic uncertainty \sigma_i^{sys}. The total uncertainty is combined in quadrature, \sigma_i^2 = \sigma_{\bar{O}_i}^2 + (\sigma_i^{sys})^2. It turns out that \sigma_i^{sys} always dominates \sigma_i.

We adopt the log-likelihood-ratio test method [100] to set upper limits on the DPDM/axion DM parameter space. For frequency bin i_0 we build a likelihood function of Gaussian form [101]

    L(S, \mathbf{a}) = \prod_{i=i_0-5}^{i_0+5} \frac{1}{\sigma_i \sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{B(\mathbf{a}, f_i) + S\,\delta_{i i_0} - \bar{O}_i}{\sigma_i}\right)^2\right],    (10)

where B(\mathbf{a}, f_i) is the polynomial background fit, the coefficients \mathbf{a} = (a_1, a_2, a_3) are treated as nuisance parameters, and S is the assumed DPDM/axion DM signal in bin i_0. We then build the test statistic [100, 101]

    q_S = \begin{cases} -2 \ln\left[L(S, \tilde{\mathbf{a}}) / L(\hat{S}, \hat{\mathbf{a}})\right], & \hat{S} \le S \\ 0, & \hat{S} > S \end{cases}.    (11)

In the denominator, the likelihood L is maximized at \mathbf{a} = \hat{\mathbf{a}} and S = \hat{S}; in the numerator, L is maximized at \mathbf{a} = \tilde{\mathbf{a}} for the specified S. The test statistic q_S follows
The test statistic qS follows +the half-χ2 distribution: +h(qS|S) = 1 +2δ(qS) + 1 +2 +1 +√ +2π +1 +√qS +e−qS/2, +(12) +the +cumulative +distribution +function +of +which +is +H(qS|S) = 1 +2 + 1 +2erf( +� +qS/2). Then, we can define the +following criterion [100, 101]: +pS = 1 − erf( +� +qS/2) +1 − erf( +� +q0/2) +, +(13) +which measures how far the assumed signal is away from +the null result S = 0. We set pS = 0.05 to get the 95% +confidence level (C.L.) upper limit, Slim. +The results +of Slim as functions of frequency are shown in Fig. 2, +in which the datasets we use are from three observation +periods shown in different colors. +Note that there are +high peaks at some frequencies (e.g., 40 MHz) in the plot. +Solar bursts may induce this at these frequencies, which +weaken the upper limits. +We calculate the 95% C.L. upper limits on ϵ for DPDM +model and on gaγγ for the axion DM model by requir- +ing Slim equal to Ssig in (8) for each dataset. +Among +the constraints from the three datasets, we choose the +Apr. 25 +Jun. 03 +Sept. 03 +30 +40 +50 +60 +70 +80 +10-25 +10-24 +10-23 +10-22 +10-21 +f [MHz] +Slim [W/m2/Hz] +Figure 2. +Model-independent 95% C.L. upper limits from +LOFAR data on a constant mono-chromatic signal. The blue, +orange, and green limits represent the observation data on +April 25, 2015, July 3, 2015, and September 3, 2015. +strongest one at each frequency bin as the final upper +limit. The limit from LOFAR data for DPDM is plot- +ted in Fig. 2, which shows that the upper limit on ϵ is +about 10−13 in the frequency range 30 − 80 MHz. It is +about one order of magnitude better than the existing +CMB constraint [17, 49], and is complementary to other +searches for DPDM with higher frequency, such as the +Dark E-field experiment [102]. +1 +2 +3 +4 +5 +6 +10-14 +10-13 +10-12 +10-11 +10-10 +30 +40 +50 +60 +70 +80 90 100 +120 +mA' [10-7eV] +ϵ × (ρA'/ρDM)0.5 +f [MHz] +LOFAR +CMB Distortions +Dark E-field +WISPDMX +Figure 3. 95% C.L. 
upper limit on the kinetic mixing \epsilon for dark photon DM from the 17-minute observations of LOFAR data. We also show the existing constraints from CMB distortions [17, 49] and from the haloscope searches WISPDMX [103] and the Dark E-field experiment [102].

The upper limit for DPDM can be readily translated into an upper limit for the axion DM model using the relation (6). To be conservative, we take |B_T| = 1 Gauss, the lower value given in Ref. [91]; the resulting constraint on g_{a\gamma\gamma} is plotted as the solid line in Fig. 4. For comparison, we also plot as the dashed line the result for |B_T| = 4 Gauss, the upper value of the solar corona magnetic field in Ref. [91]. The large uncertainty of the magnetic field overshadows the other statistical and systematic uncertainties, so the zig-zag features shown in Fig. 3 become less meaningful in the axion DM case. As a result, in Fig. 4 we average the upper limits over every 20 frequency bins to indicate the sensitivity of the LOFAR data to the axion DM model.

Figure 4. 95% C.L. upper limit on the axion-photon coupling g_{a\gamma\gamma} from the 17-minute observations of LOFAR data. The solid (dashed) blue lines are for |B_T| = 1 (4) Gauss, respectively. We also show existing constraints from various experiments and astrophysical observations (data from the website [57]), including the Light-Shining-through-a-Wall experiments CROWS [104], ALPS [105], and OSQAR [106]; the helioscope CAST [107]; the haloscope ADMX SLIC [108]; and the astrophysical bounds from magnetic white dwarf polarization (MWDP) [71], globular clusters [80, 81], pulsars [64], and quasars and blazars (QBs, shown in dashed gray) [76-78].
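The limit-setting criterion of Eqs. (11)-(13) can be illustrated with a toy single-bin example (hypothetical numbers, not LOFAR data): for a Gaussian measurement with no downward constraint, the p_S = 0.05 condition recovers the familiar S_hat + 1.96 sigma upper limit.

```python
import math

def p_value(q_s, q_0):
    """Eq. (13): p_S built from the half-chi^2 CDF of the test statistic."""
    return (1 - math.erf(math.sqrt(q_s / 2))) / (1 - math.erf(math.sqrt(q_0 / 2)))

# Toy single-bin model: best-fit signal S_hat with Gaussian uncertainty sigma,
# so q_S = ((S - S_hat)/sigma)^2 for S >= S_hat and q_S = 0 otherwise (Eq. (11)).
S_hat, sigma = 0.5, 1.0

def q_of(S):
    return ((S - S_hat) / sigma) ** 2 if S > S_hat else 0.0

# Scan S upward until p_S drops to 0.05, giving the 95% C.L. upper limit.
S = S_hat
while p_value(q_of(S), q_of(0.0)) > 0.05:
    S += 1e-3
print(round(S, 2))   # 2.46, i.e. S_hat + 1.96*sigma
```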
We see that our result exceeds the existing constraints from the Light-Shining-through-a-Wall experiments CROWS [104], ALPS [105], and OSQAR [106], but is not as competitive as direct-search experiments such as CAST [107] or ADMX SLIC [108] (in very narrow bands), or as the astrophysical bounds from observations of magnetic white dwarf polarization [71], globular clusters [80, 81], pulsars [64], and quasars and blazars [76-78].

SUMMARY AND OUTLOOK

When DPDM or axion DM passes across the Sun, it can resonantly convert into EM waves in the solar corona. We numerically simulated the propagation of the converted photons in the plasma, including the effects of absorption and scattering. Radio telescopes for solar observations can detect the mono-chromatic converted EM waves. We used three 17-minute observation datasets from LOFAR to search for such signals. We found that this method sets a stringent limit on the kinetic mixing of the dark photon, \epsilon ~ 10^-13, in the frequency range 30-80 MHz, about one order of magnitude stronger than the CMB constraint. Similarly, we obtained an upper limit on the axion-photon coupling g_{a\gamma\gamma} for the axion DM model in the same frequency range. The constraint on g_{a\gamma\gamma} is better than that from Light-Shining-through-a-Wall experiments; however, it does not exceed the bounds from the CAST experiment, the haloscope-type experiment ADMX SLIC, or other astrophysical bounds. The LOFAR data analysis in this work shows great potential for searching for ultralight DM with radio telescopes. We expect future radio programs with greater sensitivity, such as the SKA telescope, to reach better sensitivity in DPDM and axion DM searches. Terrestrial radio telescopes cannot search for DPDM with frequency below 10 MHz due to the screening effect of the ionosphere.
In this case, we may use solar probes carrying radio spectrometers, such as the STEREO satellite [109] and the Parker Solar Probe [110], to search for them.

Acknowledgments. The authors would like to thank Eduard Kontar for helpful discussions, and especially for the interpretation of the data format and calibrations. The work of HA is supported in part by the National Key R&D Program of China under Grants No. 2021YFC2203100 and 2017YFA0402204, the NSFC under Grant No. 11975134, and the Tsinghua University Initiative Scientific Research Program. The work of SG is supported by the International Postdoctoral Exchange Fellowship Program and the Boya Postdoctoral Fellowship of Peking University. The work of JL is supported by the NSFC under Grants No. 12075005 and 12235001, and by Peking University under startup Grant No. 7101502458.

Appendix: The simulation of the scattering and absorption effects

In this appendix, we derive the effective spectral flux density (8) received by the LOFAR stations.

First, the field of view (FOV) of LOFAR, or effectively its Full Width at Half Maximum (FWHM), is determined by

    \mathrm{FWHM} = \alpha \times \frac{\lambda}{D},    (S1)

where \lambda is the observation wavelength, the coefficient \alpha = 1.02 [111], and D \simeq 3.5 km is the station diameter according to Ref. [112]. Therefore, the FWHM (for one beam) is about 10^-3 rad.

We can effectively define the last scattering sphere of radius R_S, beyond which the scattering effect can be ignored, so that the radio waves propagate in straight lines for r > R_S. The total radiation power for the dark photon signal at frequency f after conversion is dP/d\Omega \times 4\pi, so the surviving power at the last scattering sphere is

    P = P_{\rm sur}(f)\, 4\pi\, \frac{dP}{d\Omega}.    (S2)

Figure S1. Schematic diagram of the propagation of photons after the last scattering. R_C denotes the conversion layer; R_S denotes the last scattering sphere.
A surface element dA_1 contains a point P_1 as the radiation source on the last scattering sphere. A surface element dA_2 containing P_2 is the detection area on the Earth, which defines a solid angle d\Omega_1 about P_1 in the direction of \mathbf{r}. \theta_i is the angle between the propagation vector \mathbf{r} and the normal vector N_i of dA_i. We set the direction of N_2 along the line between the centers of the Sun and the Earth.

Considering a virtual source point P_1 within a surface element dA_1 on the last scattering sphere (see the schematic diagram, Fig. S1), the power it radiates in the direction \mathbf{r} is

    dP' = P \frac{dA_1}{4\pi R_S^2}\, g(\theta_1, \varphi_1)\, d\Omega_1,    (S3)

where the angular distribution function g(\theta_1, \varphi_1) accounts for the fact that, after multiple random scattering events, the radiation from the surface element is not simply radial. g(\theta_1, \varphi_1) is normalized as

    1 = \int g(\theta_1, \varphi_1)\, d\Omega_1.    (S4)

The relation d\Omega_1 = dA_2 \cos\theta_2 / r^2 is useful, where the cosine converts the receiving area dA_2 to the projected area normal to \mathbf{r}. Eq. (S3) then becomes

    dP' = P \frac{dA_1}{4\pi R_S^2}\, g(\theta_1, \varphi_1)\, \frac{dA_2 \cos\theta_2}{r^2},    (S5)

where r is the distance from the surface element to the Earth. Meanwhile, \theta_2 is of order 10^-3 rad, so \cos\theta_2 \simeq 1. Substituting Eq. (S2) into Eq. (S5) and integrating over the area on the last scattering sphere covered by the beams, we arrive at the effective spectral flux density (power per unit area and unit frequency) received by LOFAR:

    S_{\rm sig} = P_{\rm sur}\, \frac{1}{B}\, \frac{1}{R_S^2}\, \frac{dP}{d\Omega} \int_{\rm beam} \frac{g(\theta_1, \varphi_1)}{r^2}\, dA_1.    (S6)

As discussed in the main text, the angular distribution function g(\theta_1, \varphi_1) can be obtained from numerical simulations. The integration is performed in spherical coordinates (\theta, \varphi) with the solar center as the origin, so it can be transformed into

    S_{\rm sig} = P_{\rm sur}\, \frac{1}{d^2}\, \frac{1}{B}\, \frac{dP}{d\Omega} \int_{\rm beam} g(\theta_1, \varphi_1)\, \frac{\sin\theta_1}{\cos\theta_2}\, d\theta_1\, d\varphi_1,    (S7)

where d = 1 AU is the distance from the Earth to the Sun.
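Two quick numerical checks of the appendix formulas (illustrative sketches, not the paper's code): Eq. (S1) gives the quoted ~10^-3 rad beam width, and the angular integral in Eq. (S7) integrates to unity for a sharply peaked g(θ_1), recovering the no-scattering limit S_sig = P_sur dP/dΩ / (B d²). The toy g here is a normalized 2D Gaussian, an assumption for illustration only.

```python
import math

C_M_S = 2.998e8   # speed of light, m/s

def fwhm_rad(freq_hz, diameter_m=3500.0, alpha=1.02):
    """Eq. (S1): FWHM = alpha * lambda / D for the ~3.5 km LOFAR core."""
    return alpha * (C_M_S / freq_hz) / diameter_m

def geom_factor(width, d_over_rs=215.0 / 6.0, n=20000):
    """Angular integral of Eq. (S7) for a toy 2D-Gaussian g(theta_1) of the
    given angular width (rad), with cos(theta_2) from the geometry
    d ~ 215 R_sun and R_S ~ 6 R_sun.  Should be ~1 for a narrow g."""
    total, theta_max = 0.0, 10 * width
    dtheta = theta_max / n
    for i in range(n):
        th = (i + 0.5) * dtheta
        g = math.exp(-th**2 / (2 * width**2)) / (2 * math.pi * width**2)
        cos_t2 = math.sqrt(1 - (math.sin(th) / d_over_rs) ** 2)
        total += 2 * math.pi * g * math.sin(th) / cos_t2 * dtheta
    return total

print(fwhm_rad(30e6))      # ~3e-3 rad, the "about 10^-3 rad" beam width
print(geom_factor(1e-3))   # ~1.0: no-scattering limit recovered
```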
\cos\theta_2 = \sqrt{1 - R_S^2 \sin^2\theta_1 / d^2} is the geometric relation. The R_S dependence in \cos\theta_2 is canceled by the implicit R_S dependence in g(\theta_1, \varphi_1, R_S). For the simplest case with no scattering, g(\theta_1, \varphi_1) = \delta(\theta_1)/(2\pi \sin\theta_1), and Eq. (S7) becomes S_sig = P_sur \cdot (1/d^2) \cdot (1/B) \cdot dP/d\Omega, as expected. It is worth noting that, since the data are averaged over the beams with flux larger than 50% of the maximum beam flux, the spherical surface integral runs over the area covered by these selected beams and is then divided by the number of selected beams.

∗ anhp@mail.tsinghua.edu.cn
† Xingyao.Chen@glasgow.ac.uk
‡ sge@pku.edu.cn
§ jialiu@pku.edu.cn
¶ ly23@stu.pku.edu.cn

[1] LUX Collaboration, D. S. Akerib et al., "Results from a search for dark matter in the complete LUX exposure," Phys. Rev. Lett. 118 no. 2, (2017) 021303, arXiv:1608.07648 [astro-ph.CO].
[2] XENON Collaboration, E. Aprile et al., "Dark Matter Search Results from a One Ton-Year Exposure of XENON1T," Phys. Rev. Lett. 121 no. 11, (2018) 111302, arXiv:1805.12562 [astro-ph.CO].
[3] PandaX-4T Collaboration, Y. Meng et al., "Dark Matter Search Results from the PandaX-4T Commissioning Run," Phys. Rev. Lett. 127 no. 26, (2021) 261802, arXiv:2107.13438 [hep-ex].
[4] R. D. Peccei and H. R. Quinn, "CP Conservation in the Presence of Instantons," Phys. Rev. Lett. 38 (1977) 1440-1443.
[5] R. D. Peccei and H. R. Quinn, "Constraints Imposed by CP Conservation in the Presence of Instantons," Phys. Rev. D 16 (1977) 1791-1797.
[6] S. Weinberg, "A New Light Boson?," Phys. Rev. Lett. 40 (1978) 223-226.
[7] F. Wilczek, "Problem of Strong P and T Invariance in the Presence of Instantons," Phys. Rev. Lett. 40 (1978) 279-282.
[8] J. Ipser and P. Sikivie, "Can Galactic Halos Be Made of Axions?," Phys. Rev. Lett. 50 (1983) 925.
[9] P. Svrcek and E. Witten, "Axions In String Theory," JHEP 06 (2006) 051, arXiv:hep-th/0605206 [hep-th].
[10] J. Preskill, M. B. Wise, and F.
Wilczek, "Cosmology of the Invisible Axion," Phys. Lett. B120 (1983) 127-132.
[11] L. F. Abbott and P. Sikivie, "A Cosmological Bound on the Invisible Axion," Phys. Lett. B120 (1983) 133-136.
[12] M. Dine and W. Fischler, "The Not So Harmless Axion," Phys. Lett. B120 (1983) 137-141.
[13] A. Vilenkin and A. E. Everett, "Cosmic strings and domain walls in models with Goldstone and pseudo-Goldstone bosons," Phys. Rev. Lett. 48 no. 26, (1982) 1867.
[14] P. Sikivie, "Axions, domain walls, and the early universe," Phys. Rev. Lett. 48 no. 17, (1982) 1156.
[15] J. Redondo and M. Postma, "Massive hidden photons as lukewarm dark matter," JCAP 02 (2009) 005, arXiv:0811.0326 [hep-ph].
[16] A. E. Nelson and J. Scholtz, "Dark Light, Dark Matter and the Misalignment Mechanism," Phys. Rev. D84 (2011) 103501, arXiv:1105.2812 [hep-ph].
[17] P. Arias, D. Cadamuro, M. Goodsell, J. Jaeckel, J. Redondo, and A. Ringwald, "WISPy Cold Dark Matter," JCAP 1206 (2012) 013, arXiv:1201.5902 [hep-ph].
[18] P. W. Graham, J. Mardon, and S. Rajendran, "Vector Dark Matter from Inflationary Fluctuations," Phys. Rev. D93 no. 10, (2016) 103520, arXiv:1504.02102 [hep-ph].
[19] B. Holdom, "Two U(1)'s and Epsilon Charge Shifts," Phys. Lett. 166B (1986) 196-198.
[20] K. R. Dienes, C. F. Kolda, and J. March-Russell, "Kinetic mixing and the supersymmetric gauge hierarchy," Nucl. Phys. B 492 (1997) 104-118, arXiv:hep-ph/9610479.
[21] S. A. Abel and B. W. Schofield, "Brane anti-brane kinetic mixing, millicharged particles and SUSY breaking," Nucl. Phys. B 685 (2004) 150-170, arXiv:hep-th/0311051.
[22] S. A. Abel, M. D. Goodsell, J. Jaeckel, V. V. Khoze, and A. Ringwald, "Kinetic Mixing of the Photon with Hidden U(1)s in String Phenomenology," JHEP 07 (2008) 124, arXiv:0803.1449 [hep-ph].
[23] S. A. Abel, J. Jaeckel, V. V. Khoze, and A. Ringwald, "Illuminating the Hidden Sector of String Theory by Shining Light through a Magnetic Field," Phys. Lett. B 666 (2008) 66-70, arXiv:hep-ph/0608248.
[24] M. Goodsell, J. Jaeckel, J. Redondo, and A. Ringwald, "Naturally Light Hidden Photons in LARGE Volume String Compactifications," JHEP 11 (2009) 027, arXiv:0909.0515 [hep-ph].
[25] G. Alonso-Álvarez, T. Hugle, and J. Jaeckel, "Misalignment & Co.: (Pseudo-)scalar and vector dark matter with curvature couplings," arXiv:1905.09836 [hep-ph].
[26] K. Nakayama, "Vector Coherent Oscillation Dark Matter," JCAP 1910 (2019) 019, arXiv:1907.06243 [hep-ph].
[27] K. Nakayama, "Constraint on Vector Coherent Oscillation Dark Matter with Kinetic Function," JCAP 08 (2020) 033, arXiv:2004.10036 [hep-ph].
[28] Y. Ema, K. Nakayama, and Y. Tang, "Production of Purely Gravitational Dark Matter: The Case of Fermion and Vector Boson," JHEP 07 (2019) 060, arXiv:1903.10973 [hep-ph].
[29] E. W. Kolb and A. J. Long, "Completely dark photons from gravitational particle production during the inflationary era," JHEP 03 (2021) 283, arXiv:2009.03828 [astro-ph.CO].
[30] B. Salehian, M. A. Gorji, H. Firouzjahi, and S. Mukohyama, "Vector dark matter production from inflation with symmetry breaking," Phys. Rev. D 103 no. 6, (2021) 063526, arXiv:2010.04491 [hep-ph].
[31] A. Ahmed, B. Grzadkowski, and A. Socha, "Gravitational production of vector dark matter," JHEP 08 (2020) 059, arXiv:2005.01766 [hep-ph].
[32] Y. Nakai, R. Namba, and Z. Wang, "Light Dark Photon Dark Matter from Inflation," JHEP 12 (2020) 170, arXiv:2004.10743 [hep-ph].
[33] K. Nakayama and Y. Tang, "Gravitational Production of Hidden Photon Dark Matter in Light of the XENON1T Excess," Phys. Lett. B 811 (2020) 135977, arXiv:2006.13159 [hep-ph].
[34] H. Firouzjahi, M. A. Gorji, S. Mukohyama, and B. Salehian, "Dark photon dark matter from charged inflaton," JHEP 06 (2021) 050, arXiv:2011.06324 [hep-ph].
[35] M. Bastero-Gil, J. Santiago, L. Ubaldi, and R. Vega-Morales, "Dark photon dark matter from a rolling inflaton," JCAP 02 no. 02, (2022) 015, arXiv:2103.12145 [hep-ph].
[36] H. Firouzjahi, M. A. Gorji, S. Mukohyama, and A. Talebian, "Dark matter from entropy perturbations in curved field space," Phys. Rev. D 105 no. 4, (2022) 043501, arXiv:2110.09538 [gr-qc].
[37] T. Sato, F. Takahashi, and M. Yamada, "Gravitational production of dark photon dark matter with mass generated by the Higgs mechanism," arXiv:2204.11896 [hep-ph].
[38] R. T. Co, A. Pierce, Z. Zhang, and Y. Zhao, "Dark Photon Dark Matter Produced by Axion Oscillations," arXiv:1810.07196 [hep-ph].
[39] J. A. Dror, K. Harigaya, and V. Narayan, "Parametric Resonance Production of Ultralight Vector Dark Matter," arXiv:1810.07195 [hep-ph].
[40] M. Bastero-Gil, J. Santiago, L. Ubaldi, and R. Vega-Morales, "Vector dark matter production at the end of inflation," arXiv:1810.07208 [hep-ph].
[41] P. Agrawal, N. Kitajima, M. Reece, T. Sekiguchi, and F. Takahashi, "Relic Abundance of Dark Photon Dark Matter," arXiv:1810.07188 [hep-ph].
[42] R. T. Co, K. Harigaya, and A. Pierce, "Gravitational waves and dark photon dark matter from axion rotations," JHEP 12 (2021) 099, arXiv:2104.02077 [hep-ph].
[43] K. Nakayama and W. Yin, "Hidden photon and axion dark matter from symmetry breaking," JHEP 10 (2021) 026, arXiv:2105.14549 [hep-ph].
[44] A. J. Long and L.-T. Wang, "Dark Photon Dark Matter from a Network of Cosmic Strings," arXiv:1901.03312 [hep-ph].
[45] P. Sikivie, "Experimental Tests of the Invisible Axion," Phys. Rev. Lett. 51 (1983) 1415-1417. [Erratum: Phys. Rev. Lett. 52, 695 (1984)].
[46] P. Sikivie, "Detection Rates for 'Invisible' Axion Searches," Phys. Rev. D 32 (1985) 2988. [Erratum: Phys. Rev. D 36, 974 (1987)].
[47] L. B. Okun, "Limits of Electrodynamics: Paraphotons?," Sov. Phys. JETP 56 (1982) 502. [Zh. Eksp. Teor. Fiz. 83, 892 (1982)].
[48] K. Van Bibber, N. R. Dagdeviren, S. E. Koonin, A. Kerman, and H. N. Nelson, "Proposed experiment to produce and detect light pseudoscalars," Phys. Rev. Lett. 59 (1987) 759-762.
[49] S. D. McDermott and S. J. Witte, "Cosmological evolution of light dark photon dark matter," Phys. Rev. D101 no. 6, (2020) 063030, arXiv:1911.05086 [hep-ph].
[50] Fermi-LAT Collaboration, M. Ajello et al., "Search for Spectral Irregularities due to Photon-Axionlike-Particle Oscillations with the Fermi Large Area Telescope," Phys. Rev. Lett. 116 no. 16, (2016) 161101, arXiv:1603.06978 [astro-ph.HE].
[51] M. Meyer and T. Petrushevska, "Search for Axionlike-Particle-Induced Prompt γ-Ray Emission from Extragalactic Core-Collapse Supernovae with the Fermi Large Area Telescope," Phys. Rev. Lett. 124 no. 23, (2020) 231101, arXiv:2006.06722 [astro-ph.HE]. [Erratum: Phys. Rev. Lett. 125, 119901 (2020)].
[52] H. An, M. Pospelov, and J. Pradler, "New stellar constraints on dark photons," Phys. Lett. B 725 (2013) 190-195, arXiv:1302.3884 [hep-ph].
[53] H. An, M. Pospelov, J. Pradler, and A. Ritz, "New limits on dark photons from solar emission and keV scale dark matter," Phys. Rev. D 102 (2020) 115022, arXiv:2006.13929 [hep-ph].
[54] H. An, M. Pospelov, J. Pradler, and A. Ritz, "Direct Detection Constraints on Dark Photon Dark Matter," Phys. Lett. B 747 (2015) 331-338, arXiv:1412.8378 [hep-ph].
[55] H. An, M. Pospelov, and J. Pradler, "Dark Matter Detectors as Dark Photon Helioscopes," Phys. Rev. Lett. 111 (2013) 041302, arXiv:1304.3461 [hep-ph].
[56] A. Caputo, A. J. Millar, C. A. J. O'Hare, and E. Vitagliano, "Dark photon limits: A handbook," Phys. Rev. D 104 no. 9, (2021) 095029, arXiv:2105.04565 [hep-ph].
[57] C. O'Hare, "cajohare/axionlimits: AxionLimits," https://cajohare.github.io/AxionLimits/, July 2020.
[58] M. S. Pshirkov and S. B. Popov, "Conversion of Dark matter axions to photons in magnetospheres of neutron stars," J. Exp. Theor. Phys. 108 (2009) 384-388, arXiv:0711.1264 [astro-ph].
[59] F. P. Huang, K. Kadota, T. Sekiguchi, and H. Tashiro, "Radio telescope search for the resonant conversion of cold dark matter axions from the magnetized astrophysical sources," Phys. Rev. D 97 no. 12, (2018) 123001, arXiv:1803.08230 [hep-ph].
[60] A. Hook, Y. Kahn, B. R. Safdi, and Z. Sun, "Radio Signals from Axion Dark Matter Conversion in Neutron Star Magnetospheres," Phys. Rev. Lett. 121 no. 24, (2018) 241102, arXiv:1804.03145 [hep-ph].
[61] B. R. Safdi, Z. Sun, and A. Y. Chen, "Detecting Axion Dark Matter with Radio Lines from Neutron Star Populations," Phys. Rev. D 99 no. 12, (2019) 123021, arXiv:1811.01020 [astro-ph.CO].
[62] J.-F. Fortin and K. Sinha, "X-Ray Polarization Signals from Magnetars with Axion-Like-Particles," JHEP 01 (2019) 163, arXiv:1807.10773 [hep-ph].
[63] J.-F. Fortin and K. Sinha, "Constraining Axion-Like-Particles with Hard X-ray Emission from Magnetars," JHEP 06 (2018) 048, arXiv:1804.01992 [hep-ph].
[64] D. Noordhuis, A. Prabhu, S. J. Witte, A. Y. Chen, F. Cruz, and C. Weniger, "Novel Constraints on Axions Produced in Pulsar Polar Cap Cascades," arXiv:2209.09917 [hep-ph].
[65] D. K. Hong, C. S. Shin, and S. Yun, "Cooling of young neutron stars and dark gauge bosons," Phys. Rev. D 103 no. 12, (2021) 123031, arXiv:2012.05427 [hep-ph].
[66] M. D. Diamond and G. Marques-Tavares, "γ-Ray Flashes from Dark Photons in Neutron Star Mergers," Phys. Rev. Lett. 128 no. 21, (2022) 211101, arXiv:2106.03879 [hep-ph].
[67] B.-Q. Lu and C.-W. Chiang, "Probing dark gauge boson with observations from neutron stars," Phys. Rev. D 105 no. 12, (2022) 123017, arXiv:2107.07692 [hep-ph].
[68] E. Hardy and N. Song, "Listening for Dark Photon Radio from the Galactic Centre," arXiv:2212.09756 [hep-ph].
[69] J.-W. Wang, X.-J. Bi, R.-M. Yao, and P.-F. Yin, "Exploring axion dark matter through radio signals from magnetic white dwarf stars," Phys. Rev. D 103 no. 11, (2021) 115021, arXiv:2101.02585 [hep-ph].
[70] C. Dessert, A. J. Long, and B. R. Safdi, "X-ray Signatures of Axion Conversion in Magnetic White Dwarf Stars," Phys. Rev. Lett. 123 no. 6, (2019) 061104, arXiv:1903.05088 [hep-ph].
[71] C. Dessert, D. Dunsky, and B. R. Safdi, "Upper limit on the axion-photon coupling from magnetic white dwarf polarization," Phys. Rev. D 105 no. 10, (2022) 103034, arXiv:2203.04319 [hep-ph].
[72] J. Jaeckel, P. C. Malta, and J. Redondo, "Decay photons from the axionlike particles burst of type II supernovae," Phys. Rev. D 98 no. 5, (2018) 055032, arXiv:1702.02964 [hep-ph].
[73] A. Caputo, G. Raffelt, and E. Vitagliano, "Muonic boson limits: Supernova redux," Phys. Rev. D 105 no. 3, (2022) 035022, arXiv:2109.03244 [hep-ph].
[74] A. De Angelis, G. Galanti, and M. Roncadelli, "Relevance of axion-like particles for very-high-energy astrophysics," Phys. Rev. D 84 (2011) 105030, arXiv:1106.1132 [astro-ph.HE]. [Erratum: Phys. Rev. D 87, 109903 (2013)].
[75] J. Guo, H.-J. Li, X.-J. Bi, S.-J. Lin, and P.-F. Yin, "Implications of axion-like particles from the Fermi-LAT and H.E.S.S. observations of PG 1553+113 and PKS 2155−304," Chin. Phys. C 45 no. 2, (2021) 025105, arXiv:2002.07571 [astro-ph.HE].
[76] H.-J. Li, J.-G. Guo, X.-J. Bi, S.-J. Lin, and P.-F. Yin, "Limits on axion-like particles from Mrk 421 with 4.5-year period observations by ARGO-YBJ and Fermi-LAT," Phys. Rev. D 103 no. 8, (2021) 083003, arXiv:2008.09464 [astro-ph.HE].
[77] H.-J. Li, X.-J. Bi, and P.-F. Yin, "Searching for axion-like particles with the blazar observations of MAGIC and Fermi-LAT," Chin. Phys. C 46 no. 8, (2022) 085105, arXiv:2110.13636 [astro-ph.HE].
[78] J. Davies, M. Meyer, and G. Cotter, "Constraints on axionlike particles from a combined analysis of three flaring Fermi flat-spectrum radio quasars," arXiv:2211.03414 [astro-ph.HE].
[79] J. Redondo and G.
Raffelt, “Solar constraints on +hidden photons re-visited,” JCAP 1308 (2013) 034, +arXiv:1305.2920 [hep-ph]. +[80] A. Ayala, I. Dom´ınguez, M. Giannotti, A. Mirizzi, and +O. Straniero, “Revisiting the bound on axion-photon +coupling from Globular Clusters,” Phys. Rev. Lett. +113 no. 19, (2014) 191302, arXiv:1406.6053 +[astro-ph.SR]. +[81] M. J. Dolan, F. J. Hiskens, and R. R. Volkas, +“Advancing globular cluster constraints on the +axion-photon coupling,” JCAP 10 (2022) 096, +arXiv:2207.03102 [hep-ph]. +[82] N. Vinyoles, A. Serenelli, F. L. Villante, S. Basu, +J. Redondo, and J. Isern, “New axion and hidden +photon constraints from a solar data global fit,” JCAP +2015 no. 10, (Oct., 2015) 015–015, arXiv:1501.01639 +[astro-ph.SR]. +[83] W. DeRocco, S. Wegsman, B. Grefenstette, J. Huang, +and K. Van Tilburg, “First Indirect Detection +Constraints on Axions in the Solar Basin,” Phys. Rev. +Lett. 129 no. 10, (2022) 101101, arXiv:2205.05700 +[hep-ph]. +[84] H. An, F. P. Huang, J. Liu, and W. Xue, +“Radio-frequency Dark Photon Dark Matter across the +Sun,” Phys. Rev. Lett. 126 no. 18, (2021) 181102, +arXiv:2010.15836 [hep-ph]. +[85] M. P. van Haarlem et al., “LOFAR: The +LOw-Frequency ARray,” Astron. Astrophys. 556 +(2013) A2, arXiv:1305.3550 [astro-ph.IM]. +[86] G. Raffelt and L. Stodolsky, “Mixing of the Photon +with Low Mass Particles,” Phys. Rev. D37 (1988) +1237. +[87] P. E. Dewdney, P. J. Hall, R. T. Schilizzi, and T. J. L. +Lazio, “The square kilometre array,” Proceedings of the +IEEE 97 no. 8, (2009) 1482–1496. +[88] V. De La Luz, A. Lara, E. Mendoza, and M. Shimojo, +“3D Simulations of the Quiet Sun Radio Emission at +Millimeter and Submillimeter Wavelengths,” Geofisica +Internacional 47 (July, 2008) 197–203. +[89] S. K. Solanki, B. Inhester, and M. Sch¨ussler, “The +solar magnetic field,” Reports on Progress in Physics +69 no. 3, (2006) 563. +[90] M. J. Aschwanden, “Chapter 11 - The Sun,” in +Encyclopedia of the Solar System (Third Edition), +T. Spohn, D. 
Breuer, and T. V. Johnson, eds., +pp. 235–259. Elsevier, Boston, third edition ed., 2014. +https://www.sciencedirect.com/science/article/ +pii/B9780124158450000116. +[91] Z. Yang, C. Bethge, H. Tian, S. Tomczyk, R. Morton, +G. Del Zanna, S. W. McIntosh, B. B. Karak, +S. Gibson, T. Samanta, et al., “Global maps of the +magnetic field in the solar corona,” Science 369 +no. 6504, (2020) 694–697. +[92] P. F. de Salas, K. Malhan, K. Freese, K. Hattori, and +M. Valluri, “On the estimation of the Local Dark +Matter Density using the rotation curve of the Milky +Way,” JCAP 10 (2019) 037, arXiv:1906.06133 +[astro-ph.GA]. +[93] P. F. de Salas and A. Widmark, “Dark matter local +density determination: recent observations and future +prospects,” Rept. Prog. Phys. 84 no. 10, (2021) +104901, arXiv:2012.11477 [astro-ph.GA]. +[94] P. J. McMillan and J. J. Binney, “The uncertainty in +Galactic parameters,” Mon. Not. Roy. Astron. Soc. +402 (2010) 934, arXiv:0907.4685 [astro-ph.GA]. +[95] J. Bovy, D. W. Hogg, and H.-W. Rix, “Galactic +masers and the Milky Way circular velocity,” +Astrophys. J. 704 (2009) 1704–1709, arXiv:0907.5423 +[astro-ph.GA]. +[96] G. Thejappa and R. J. MacDowall, “Effects of +scattering on radio emission from the quiet sun at low +frequencies,” The Astrophysical Journal 676 no. 2, +(Apr, 2008) 1338. +https://dx.doi.org/10.1086/528835. +[97] E. P. Kontar, X. Chen, N. Chrysaphi, N. L. S. Jeffrey, +A. G. Emslie, V. Krupar, M. Maksimovic, +M. Gordovskyy, and P. K. Browning, “Anisotropic +radio-wave scattering and the interpretation of solar +radio emission observations,” The Astrophysical +Journal 884 no. 2, (Oct, 2019) 122. +[98] K. Arzner and A. Magun, “Radiowave propagation in +a statistically inhomogeneous plasma,” Astronomy and +Astrophysics 351 (Nov., 1999) 1165–1189. +[99] N. H. Bian, A. G. Emslie, and E. P. Kontar, “A +fokker–planck framework for studying the diffusion of +radio burst waves in the solar corona,” The +Astrophysical Journal 873 no. 
1, (Mar, 2019) 33. +https://dx.doi.org/10.3847/1538-4357/ab0411. +[100] G. Cowan, K. Cranmer, E. Gross, and O. Vitells, +“Asymptotic formulae for likelihood-based tests of new +physics,” Eur. Phys. J. C 71 (2011) 1554, +arXiv:1007.1727 [physics.data-an]. [Erratum: +Eur.Phys.J.C 73, 2501 (2013)]. +[101] H. An, S. Ge, W.-Q. Guo, X. Huang, J. Liu, and +Z. Lu, “Direct detection of dark photon dark matter +using radio telescopes,” arXiv:2207.05767 [hep-ph]. +[102] B. Godfrey et al., “Search for dark photon dark +matter: Dark E field radio pilot experiment,” Phys. +Rev. D 104 no. 1, (2021) 012013, arXiv:2101.02805 +[physics.ins-det]. +[103] L. Hoang Nguyen, A. Lobanov, and D. Horns, “First +results from the WISPDMX radio frequency cavity +searches for hidden photon dark matter,” JCAP 1910 +no. 10, (2019) 014, arXiv:1907.12449 [hep-ex]. +[104] M. Betz, F. Caspers, M. Gasior, M. Thumm, and +S. W. Rieger, “First results of the CERN Resonant +Weakly Interacting sub-eV Particle Search +(CROWS),” Phys. Rev. D 88 no. 7, (2013) 075014, +arXiv:1310.8098 [physics.ins-det]. +[105] K. Ehret et al., “New ALPS Results on Hidden-Sector +Lightweights,” Phys. Lett. B 689 (2010) 149–155, +arXiv:1004.1313 [hep-ex]. +[106] OSQAR Collaboration, R. Ballou et al., “New +exclusion limits on scalar and pseudoscalar axionlike +particles from light shining through a wall,” Phys. Rev. +D 92 no. 9, (2015) 092002, arXiv:1506.08082 +[hep-ex]. +[107] CAST Collaboration, V. Anastassopoulos et al., “New +CAST Limit on the Axion-Photon Interaction,” +Nature Phys. 13 (2017) 584–590, arXiv:1705.02290 +[hep-ex]. +[108] N. Crisosto, P. Sikivie, N. S. Sullivan, D. B. Tanner, +J. Yang, and G. Rybka, “ADMX SLIC: Results from a + +6 +Superconducting LC Circuit Investigating Cold +Axions,” Phys. Rev. Lett. 124 no. 24, (2020) 241101, +arXiv:1911.05772 [astro-ph.CO]. +[109] M. L. Kaiser, T. Kucera, J. Davila, O. St Cyr, +M. Guhathakurta, and E. 
Christian, “The stereo +mission: An introduction,” Space Science Reviews 136 +no. 1, (2008) 5–16. +[110] M. Pulupa, S. D. Bale, J. W. Bonnell, T. A. Bowen, +N. Carruth, K. Goetz, D. Gordon, P. R. Harvey, +M. Maksimovic, J. C. Mart´ınez-Oliveros, +M. Moncuquet, P. Saint-Hilaire, D. Seitz, and +D. Sundkvist, “The solar probe plus radio frequency +spectrometer: Measurement requirements, analog +design, and digital signal processing,” Journal of +Geophysical Research: Space Physics 122 no. 3, (3, +2017) 2836–2854. https://onlinelibrary.wiley. +com/doi/10.1002/2016JA023345. +[111] T. W. Shimwell et al., “The LOFAR Two-metre Sky +Survey: I. Survey Description and Preliminary Data +Release,” Astron. Astrophys. 598 (2017) A104, +arXiv:1611.02700 [astro-ph.IM]. +[112] E. P. Kontar, S. Yu, A. A. Kuznetsov, A. G. Emslie, +B. Alcock, N. L. S. Jeffrey, V. N. Melnik, N. H. Bian, +and P. Subramanian, “Imaging Spectroscopy of Solar +Radio Burst Fine Structures,” Nature Commun. 8 +no. 1, (2017) 1515, arXiv:1708.06505 [astro-ph.SR]. 
diff --git a/VtE2T4oBgHgl3EQfDQZP/content/tmp_files/load_file.txt b/VtE2T4oBgHgl3EQfDQZP/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..a7806b673cbf29648f6e45c5217ac7212c0f9634
--- /dev/null
+++ b/VtE2T4oBgHgl3EQfDQZP/content/tmp_files/load_file.txt
@@ -0,0 +1,1251 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf, len=1250

Searching for Ultralight Dark Matter Conversion in Solar Corona using LOFAR Data

Haipeng An,1,2,3,4,∗ Xingyao Chen,5,† Shuailiang Ge,3,6,‡ Jia Liu,6,3,§ and Yan Luo6,¶

1 Department of Physics, Tsinghua University, Beijing 100084, China
2 Center for High Energy Physics, Tsinghua University, Beijing 100084, China
3 Center for High Energy Physics, Peking University, Beijing 100871, China
4 Frontier Science Center for Quantum Information, Beijing 100084, China
5 School of Physics & Astronomy, University of Glasgow, Glasgow, G12 8QQ, UK
6 School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China

Ultralight axions and dark photons are well-motivated dark matter (DM) candidates. Axion DM and dark photon DM (DPDM) can resonantly convert into electromagnetic (EM) waves in the solar corona when their mass is equal to the solar plasma frequency. The resultant EM waves are mono-chromatic in the radio-frequency range, with an energy equal to the DM mass, and can be detected via radio telescopes for solar observations. We search for converted mono-chromatic signals in the observational data of the high-sensitivity Low Frequency Array (LOFAR) telescope. We find that the upper limit on the kinetic mixing coupling between DPDM and the photon can reach 10^-13 in the frequency range 30−80 MHz, about one order of magnitude better than the existing constraint from the cosmic microwave background (CMB) observation. In addition, we obtain an upper limit on the axion-photon coupling in the same frequency range, which is better than the constraints from Light-Shining-through-a-Wall experiments but does not exceed the CAST or other astrophysical bounds.

arXiv:2301.03622v1 [hep-ph] 9 Jan 2023

INTRODUCTION

Due to the null results of searching for WIMPs [1–3], ultralight dark matter (DM) candidates, including QCD axions, axion-like particles, and dark photons, have been attracting more attention than ever. The QCD axion was first proposed to solve the strong CP problem [4–7] and was later shown to be a good DM candidate [8]. Axion-like particles arising in, e.g., string-theory models [9], coupled to SM particles in a similar way, can also be good DM candidates. Axions or axion-like particles can be generated by the misalignment mechanism [10–12] or the decay of topological objects [13, 14] in the early Universe. The dark photon is a vector ultralight DM candidate [15–18]; it is one of the simplest extensions of the SM, adding a massive vector field coupled to the photon field via the kinetic mixing marginal operator [19–24]. There are several ways to produce the right amount of DPDM in the early Universe, including the misalignment mechanism with a non-minimal coupling to the Ricci scalar [16, 17, 25–27], inflationary fluctuations [18, 28–37], parametric resonances [38–43], or the decay of cosmic strings [44].

The couplings of axions or dark photons with SM particles provide important tools in searching for these ultralight particles. Various types of experiments are looking for signals connected with photons, including haloscopes for Galactic halo DM [45, 46], helioscopes for ultralight particles emitted from the Sun [45, 46], "Light Shining through the Wall" (LSW) methods [47, 48], and various astrophysical bounds such as CMB spectral distortion constraints on dark photons [17, 49], gamma-ray constraints on axion DM [50, 51], and stellar lifetime constraints on dark photons [52, 53]. Axions and dark photons can also be detected via WIMP detectors [54, 55]. Moreover, many experimental results for axion DM can be reinterpreted for dark photons. We refer the readers to Refs. [56, 57] for an excellent summary of experimental constraints (including projected ones) for axions and dark photons.

Another meaningful way to look for axions or dark photons is to check for anomalous signals in various astrophysical environments, such as neutron stars [58–68], white dwarfs [68–71], supernovae [72, 73], quasars and blazars [74–78], red giants and horizontal branch stars [79], and globular clusters [80, 81]. These searches assume that the ultralight particles are either DM or sourced inside the astrophysical objects. As the closest star to us, the Sun also serves as a good laboratory for ultralight particles. Previous works set constraints on ultralight particles generated inside the Sun via stellar cooling [52, 79, 82] and axion decay [83]. On the other hand, Ref. [84] proposed that DPDM can resonantly convert into mono-chromatic radio-frequency EM waves in the solar corona at a radius where the plasma frequency equals the DPDM mass. In the presence of the solar magnetic field, axion DM can also resonantly convert into radio waves in the solar corona. In this work, we search for such mono-chromatic radio signals in the solar observation data of the LOFAR telescope [85]. To calculate the signal, we carry out a simulation of the propagation of EM waves inside the solar corona of the quiet Sun. We then compare the signal with LOFAR data to calculate the upper limits on the DPDM model and the axion DM model.

MODELS

We consider both the DPDM model and the axion DM model. In the DPDM model, the DPDM interacts with SM particles through the kinetic mixing, and the Lagrangian can be written as

\mathcal{L}_{A'\gamma} = -\frac{1}{4} F'_{\mu\nu} F'^{\mu\nu} + \frac{1}{2} m_{A'}^2 A'_\mu A'^\mu - \frac{1}{2} \epsilon F_{\mu\nu} F'^{\mu\nu},   (1)

where F_{\mu\nu} and F'_{\mu\nu} are the photon and dark photon field strengths, respectively, and \epsilon is the kinetic mixing. In the axion DM model, the axion a, as a pseudo-scalar, interacts with the SM photon via

\mathcal{L}_{a\gamma} = \frac{1}{2} \partial_\mu a \, \partial^\mu a - \frac{1}{2} m_a^2 a^2 + \frac{1}{4} g_{a\gamma\gamma} \, a F_{\mu\nu} \tilde{F}^{\mu\nu},   (2)

where \tilde{F}^{\mu\nu} \equiv \epsilon^{\mu\nu\alpha\beta} F_{\alpha\beta}/2 is the dual EM field strength and g_{a\gamma\gamma} is the coupling strength between the axion and the EM field. The last term in (2) can be simplified as -g_{a\gamma\gamma} \, a \, \mathbf{E} \cdot \mathbf{B}.
In the solar corona, the free electrons generate a plasma frequency \omega_p, which serves as the effective mass for the EM wave. In a non-relativistic plasma, it is determined by the free electron density n_e as

\omega_p = \left( \frac{4\pi\alpha \, n_e}{m_e} \right)^{1/2} = \left( \frac{n_e}{7.3 \times 10^8 \, \mathrm{cm}^{-3}} \right)^{1/2} \mu\mathrm{eV},   (3)

where \alpha is the fine structure constant and m_e is the electron mass. When a dark photon A' propagates in the plasma, it can resonantly convert into an SM photon if \omega_p \approx m_{A'} [84]. In the presence of a magnetic field, axions a can also resonantly convert into the EM field [86]. In the solar corona, n_e decreases monotonically from 10^10 to 10^6 cm^-3 with the height above the solar photosphere, so the corresponding plasma frequency scans from 4 × 10^-6 to 4 × 10^-8 eV. If the DM mass m_DM falls in this range, the resonant conversion of DPDM or axion DM into EM waves can occur at a specific radius r_c satisfying \omega_p(r_c) = m_DM.
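Eq. (3) gives a direct map between a coronal electron density and the plasma frequency, and hence the DM mass probed at a given height. A minimal sketch (the function names are our own, not from the paper) that also converts a mass in μeV to the observed radio frequency ν = m_DM/2π:

```python
import math

def plasma_freq_ueV(n_e_cm3):
    """Plasma frequency in micro-eV for an electron density in cm^-3, Eq. (3)."""
    return math.sqrt(n_e_cm3 / 7.3e8)

def resonant_density_cm3(m_dm_ueV):
    """Electron density at which omega_p equals the DM mass (resonance condition)."""
    return 7.3e8 * m_dm_ueV**2

def mass_to_freq_MHz(m_dm_ueV):
    """Observed radio frequency nu = m_DM / (2 pi); 1 micro-eV corresponds to ~241.8 MHz."""
    return m_dm_ueV * 241.8

# Coronal densities of 1e10 .. 1e6 cm^-3 give omega_p of roughly 3.7 .. 0.037 micro-eV,
# matching the 4e-6 .. 4e-8 eV window quoted in the text.
```

For instance, the 30−80 MHz LOFAR band used here corresponds to masses of roughly 0.12−0.33 μeV, i.e. resonant layers with n_e of order 10^7 cm^-3.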
Since the DM is non-relativistic in the galactic halo, the frequency of the converted EM wave is mono-chromatic and equals m_DM/2π, with a spread of about 10^-6 of its central value. The frequency of the EM waves therefore lies in the radio range of 10−1000 MHz and can be tested by various radio telescopes with solar physics programs, e.g., LOFAR [85] and SKA [87]. We assume a static and spherical solar model [88] for the quiet Sun, with the DM wind (either DPDM or axion DM) constantly passing through the solar atmosphere. The resonant conversion probabilities for the DPDM model and the axion DM model in the solar corona are [84, 86]

P_{A'\to\gamma}(v_{r_c}) = \frac{2}{3} \times \pi \epsilon^2 m_{A'} \, v_{r_c}^{-1} \left| \frac{\partial \ln \omega_p^2(r)}{\partial r} \right|^{-1}_{r=r_c},   (4)

and

P_{a\to\gamma}(v_{r_c}) = \pi \, \frac{g_{a\gamma\gamma}^2 |B_T|^2}{m_a} \, v_{r_c}^{-1} \left| \frac{\partial \ln \omega_p^2(r)}{\partial r} \right|^{-1}_{r=r_c},   (5)

respectively, where v_{r_c} is the radial velocity at the resonant layer and B_T is the magnetic field transverse to the direction of the axion propagation.
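As a numerical sanity check on Eqs. (4) and (5), the sketch below (our own helper functions; the scale height and velocity inputs are illustrative stand-ins, not values from the paper's simulation) evaluates both probabilities in natural units, treating the inverse logarithmic gradient |∂ln ω_p²/∂r|⁻¹ at r_c as a given length:

```python
import math

M_TO_INV_EV = 5.068e6    # 1 meter in natural units (eV^-1)
GAUSS_TO_EV2 = 1.95e-2   # 1 Gauss in natural (Heaviside-Lorentz) units (eV^2)

def p_dark_photon(eps, m_ueV, scale_height_m, v_over_c):
    """Resonant A' -> photon conversion probability, Eq. (4)."""
    m_eV = m_ueV * 1e-6
    L = scale_height_m * M_TO_INV_EV   # |d ln(omega_p^2)/dr|^-1 at r_c, in eV^-1
    return (2.0 / 3.0) * math.pi * eps**2 * m_eV * L / v_over_c

def p_axion(g_inv_GeV, B_gauss, m_ueV, scale_height_m, v_over_c):
    """Resonant a -> photon conversion probability, Eq. (5)."""
    g_eV = g_inv_GeV * 1e-9            # coupling given in GeV^-1
    B2 = (B_gauss * GAUSS_TO_EV2) ** 2
    m_eV = m_ueV * 1e-6
    L = scale_height_m * M_TO_INV_EV
    return math.pi * g_eV**2 * B2 / m_eV * L / v_over_c
```

Choosing g_{aγγ}|B_T| m_a = sqrt(2/3) ε m²_{A'} makes the two probabilities coincide, which is exactly the correspondence stated in Eq. (6) below.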
The probabilities in the two cases are related via

\sqrt{\frac{2}{3}} \, \epsilon \, m_{A'}^2 \;\Leftrightarrow\; g_{a\gamma\gamma} |B_T| \, m_a.   (6)

The prefactor 2/3 in (4) arises because the longitudinal mode of massive photons cannot propagate out of the plasma, whereas in the axion DM case the converted photons are all in transverse modes with polarization parallel to B_T [70, 86]. The Sun has a dipole-like magnetic field but suffers from large fluctuations [89, 90]. The global map of the magnetic field in the solar corona, obtained using the technique of the Coronal Multi-channel Polarimeter, shows that the magnetic field strength is about 1−4 Gauss in the corona at distances of 1.05−1.35 R_⊙ [91]. However, the map also shows strong magnetic field inhomogeneity depending on location, altitude, and time, which makes it difficult to accurately calculate the axion-photon conversion.
Consequently, we take a compromise by using the conservative value of 1 Gauss for |B_T| in the calculation of the axion conversion in (6). With the conversion probability, the radiation power P per solid angle dΩ at the conversion layer can be derived as

\frac{dP}{d\Omega} = \int dv_0 \, f_{DM}(v_0) \, P_{X\to\gamma}(v_0) \, \rho_{DM} \, v(r_c) \, r_c^2,   (7)

where the DM density is \rho_{DM} = 0.3 GeV cm^-3 [92, 93] and the initial DM velocity v_0 follows a Maxwellian distribution f_{DM}(v_0) with the most probable velocity of 235 km/s [94, 95]. v(r_c) = \sqrt{v_0^2 + 2 G_N M_\odot / r_c} is the DM velocity at the conversion layer including the gravitational effect of the Sun, with G_N the gravitational constant and M_⊙ the solar mass.

PROPAGATION

The converted EM waves propagating through the corona experience interactions with the plasma, including both absorption and scattering. The absorption of the converted photons proceeds mainly via the inverse bremsstrahlung process.
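The velocity factors entering Eq. (7) are simple to evaluate. A minimal sketch (our own function names; SI constants) of the gravitationally boosted speed v(r_c) and the Maxwellian f_DM:

```python
import math

G_N = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def v_at_layer(v0_kms, r_c_over_rsun):
    """DM speed at the conversion layer, v(r_c) = sqrt(v0^2 + 2 G M_sun / r_c), in km/s."""
    r_c = r_c_over_rsun * R_SUN
    v_esc2 = 2.0 * G_N * M_SUN / r_c   # squared escape speed at r_c, (m/s)^2
    return math.sqrt((v0_kms * 1e3) ** 2 + v_esc2) / 1e3

def maxwellian(v0_kms, v_p=235.0):
    """Maxwellian speed distribution (per km/s) with most probable speed v_p."""
    x = v0_kms / v_p
    return 4.0 / math.sqrt(math.pi) * x**2 * math.exp(-x**2) / v_p

# Near r_c ~ R_sun the escape speed (~618 km/s) dominates over the halo speed,
# so v(r_c) is set mostly by the Sun's gravity.
```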
Due to refraction, the converted EM wave would propagate along the radial direction soon after it leaves the resonant region if there were no scattering between the EM wave and the plasma [84]. However, the scattering effect in the inhomogeneous plasma randomizes the direction of the EM waves, leading to a broadened angular distribution of the outgoing EM waves [96, 97]. The LOFAR observation in interferometry mode has very good angular resolution: the field of view (FOV) of each LOFAR beam is significantly smaller than the total angular span of the Sun. Thus, we expect the scattering effect to suppress the signal observed by the LOFAR detector.

With the absorption and scattering effects taken into account, the spectral flux density received by LOFAR can be written as

Ssig = (1/B) (1/d²) (dP/dΩ) Psur(f) β(f), (8)

where d = 1 AU is the distance from the Earth to the Sun and B is the bandwidth, which is the larger of the DM dispersion bandwidth Bsig and the telescope spectral resolution Bres.
In our case, Bsig < Bres, so we have B = Bres. In (8), Psur is the survival probability; for each converted photon it depends on the path the photon travels, so numerical simulation is needed to calculate it. The β factor in (8) parameterizes the scattering effect and is defined as

β(f) = (d²/RS²) ∫_beam [g(θ1, φ1)/r²] dS, (9)

where g(θ1, φ1) is the angular distribution function of the scattered photons at the last scattering radius RS, beyond which the scattering process can be neglected. RS(f) is determined by numerical simulation to be about 5 to 7 R⊙, depending slightly on the photon frequency. The integration in (9) is over the last scattering surface, and r is the distance from the surface element dS to LOFAR. The detailed derivation and computation of Eq. (8) require some involved but basic geometric analysis and are presented in the appendix.
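Once dP/dΩ, Psur, and β are known, eq. (8) is simple arithmetic. The sketch below uses placeholder values (the power and the Psur, β values are illustrative picks from the ranges of Fig. 1, not simulation outputs):

```python
# Sketch of eq. (8): spectral flux density at Earth from the per-solid-angle
# power at the conversion layer. All numbers are illustrative.
AU  = 1.496e11                 # m, Earth-Sun distance d
SFU = 1e-22                    # 1 solar flux unit = 1e-22 W m^-2 Hz^-1

def s_sig(dP_dOmega, P_sur, beta, B_res=97e3, d=AU):
    """dP_dOmega in W/sr; returns the flux density in W m^-2 Hz^-1."""
    return dP_dOmega * P_sur * beta / (B_res * d**2)

# e.g. with P_sur ~ 0.3 and beta ~ 0.05:
flux = s_sig(dP_dOmega=1e9, P_sur=0.3, beta=0.05)
flux_in_sfu = flux / SFU
```

The division by Bres reflects that a mono-chromatic signal narrower than the spectral resolution is diluted over one resolution bandwidth.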
We use the Monte Carlo ray-tracing method developed in Ref. [97] to simulate the propagation of the converted photons in the corona plasma, including both the absorption and scattering effects. We describe the radio-wave scattering process using the Fokker-Planck and Langevin equations based on the Hamilton equations for photons [97-99]. In our simulation, we follow Ref. [96] in using the Kolmogorov spectrum for the electron density fluctuation of the quiet Sun, with δne/ne = 0.1. Moreover, we take the magnitude of the anisotropic density fluctuation to be αanis = 0.1 [96], where αanis is the anisotropy parameter defined as the ratio between the perpendicular and parallel correlation lengths [97].
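We do not reproduce the Fokker-Planck/Langevin ray tracing here, but the qualitative effect it captures can be illustrated with a toy model: repeated small-angle kicks diffusively randomize an initially radial photon direction, broadening the outgoing angular distribution. All parameters below are invented for illustration:

```python
import random, math

# Toy angular diffusion (not the Ref. [97] code): each scattering applies a
# small Gaussian kick to the angle w.r.t. the radial direction.
def final_angle(n_scatter=400, rms_kick=0.01, seed=0):
    rng = random.Random(seed)
    theta = 0.0                           # starts exactly radial
    for _ in range(n_scatter):
        theta += rng.gauss(0.0, rms_kick) # diffusive angular kick
    return theta

angles = [final_angle(seed=s) for s in range(500)]
# RMS spread grows like rms_kick * sqrt(n_scatter) ~ 0.2 rad here
spread = math.sqrt(sum(a * a for a in angles) / len(angles))
```

In the real simulation the kick statistics follow from the density-fluctuation spectrum, and the broadened distribution is what enters g(θ1, φ1) in (9).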
Then, for each frequency, we calculate Psur(f) and β(f); the simulation results are presented in Fig. 1. We see that the absorption effect is more substantial at higher frequencies. The same is true for the smearing effect, mainly because the FOV of LOFAR decreases with increasing frequency, which makes the smearing effect more apparent at higher frequencies.

Figure 1. The survival probability Psur (cyan) and the smearing factor β (purple) as functions of the photon frequency f [MHz].

LOFAR DATA ANALYSIS

LOFAR is an advanced radio interferometer with high resolution and sensitivity. We use the observation data of the beam-formed mode [85], which combines the 24 LOFAR core stations in the Netherlands to form 127 tied-array beams. This mode increases the sensitivity remarkably. However, the 3.5 km baseline of the LOFAR core also restricts the FOV to only ∼ 5′ at 32 MHz [85].
The observation data we use are the spectral flux densities calibrated in solar flux units (sfu) within the frequency range of 30-80 MHz. Since some beams point outside the solar surface, only beams with fluxes larger than half of the maximum beam flux are selected. We have data from three different observation periods with the same observation duration of 17 minutes, carried out on April 25, 2015, July 3, 2015, and September 3, 2015. The bandwidth is Bres = 97 kHz. We take an average of the data in the selected beams. The averaged data are distributed over 516 frequency bins, each consisting of 6000 time bins.

We clean the data using the following procedure to eliminate burst-like noise. First, we divide the 6000 time bins into 150 intervals, each containing 40 time bins, enough to reflect the statistics.
We choose the interval with the lowest mean as the reference interval. Then we compare the mean μt and standard deviation σt of each interval with those of the reference interval. We only keep the intervals that satisfy the conditions μt[test] < μt[ref] + 2σt[ref] and σt[test] < 2σt[ref]. This data-cleaning process removes only transient noise, not the time-independent ultralight DM signal.

After data cleaning, for each frequency bin i, we obtain the average value Ōi and the standard deviation σ_Ōi as the statistical uncertainty of the time series. We parameterize the background locally by fitting each frequency bin and its adjacent k bins with a polynomial function of degree n; in practice, we choose k = 10 and n = 3. Then, we use the least-squares method to evaluate the deviation of the data from the background fit.
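The interval-based cleaning cut described above can be sketched as follows (array shapes as described in the text; the injected burst is synthetic and only demonstrates the cut):

```python
import numpy as np

# Sketch of the data-cleaning cut: 6000 time bins -> 150 intervals of 40 bins;
# keep an interval only if its mean and scatter are close to the quietest one.
def clean_time_series(x, n_intervals=150):
    chunks = np.array_split(np.asarray(x), n_intervals)
    means  = np.array([c.mean() for c in chunks])
    stds   = np.array([c.std(ddof=1) for c in chunks])
    ref    = means.argmin()                    # quietest interval as reference
    keep   = (means < means[ref] + 2 * stds[ref]) & (stds < 2 * stds[ref])
    return np.concatenate([c for c, k in zip(chunks, keep) if k])

rng  = np.random.default_rng(1)
data = rng.normal(10.0, 1.0, 6000)             # quiet background
data[1000:1040] += 50.0                        # inject a burst-like transient
cleaned = clean_time_series(data)
```

A steady excess from DM conversion would raise all intervals together, including the reference, so it survives this cut by construction.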
The fitting deviation is taken to be the systematic uncertainty σi^sys. The total uncertainty is combined in quadrature, σi² = σ_Ōi² + (σi^sys)². It turns out that σi^sys always dominates in σi.

We adopt the log-likelihood-ratio test method [100] to set upper limits on the DPDM/axion DM parameter space. We build the likelihood function for the frequency bin i0 in the Gaussian form [101]

L(S, a) = ∏_{i = i0−5}^{i0+5} 1/(σi √(2π)) exp[ −(1/2) ((B(a, fi) + S δ_{i i0} − Ōi)/σi)² ], (10)

where B(a, fi) is the polynomial function for background fitting, the coefficients a = (a1, a2, a3) are treated as nuisance parameters, and S is the assumed DPDM/axion DM signal at bin i0. Then we can build the following test statistic [100, 101]:

qS = −2 ln[L(S, ã)/L(Ŝ, â)] for Ŝ ≤ S, and qS = 0 for Ŝ > S. (11)

In the denominator, the likelihood L is maximized at a = â and S = Ŝ; in the numerator, L is maximized at a = ã for the specified S.
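A minimal numerical sketch of the profile-likelihood construction in (10)-(11), on synthetic Gaussian data (all numbers are toy values, not LOFAR data; with Gaussian errors, −2 ln L reduces to a χ²):

```python
import numpy as np

# Toy 11-bin window i0-5 .. i0+5 with a smooth background plus noise.
rng   = np.random.default_rng(0)
freqs = np.linspace(40.0, 41.0, 11)
sigma = np.full(11, 0.1)
obs   = 2.0 + 0.1 * freqs + rng.normal(0.0, 0.1, 11)
i0    = 5

def chi2(S):
    """-2 ln L(S, a~) with the background coefficients profiled out."""
    resid = obs.copy()
    resid[i0] -= S                        # subtract the assumed signal at i0
    x = freqs - freqs.mean()              # centred for a well-conditioned fit
    coeff = np.polyfit(x, resid, 3)       # profiled background B(a, f)
    return float(np.sum(((resid - np.polyval(coeff, x)) / sigma) ** 2))

grid  = np.linspace(-1.0, 1.0, 2001)      # crude scan to locate S_hat
chis  = np.array([chi2(s) for s in grid])
S_hat = float(grid[chis.argmin()])        # unconditional best-fit signal

def q_S(S):
    """Test statistic of eq. (11): zero whenever S_hat exceeds the tested S."""
    return max(chi2(S) - float(chis.min()), 0.0) if S_hat <= S else 0.0
```

The one-sided definition (qS = 0 for Ŝ > S) is what makes the statistic suitable for upper limits: an upward fluctuation never disfavors a larger signal hypothesis.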
The test statistic qS follows the half-χ² distribution

h(qS|S) = (1/2) δ(qS) + (1/2) (1/√(2π)) (1/√qS) e^{−qS/2}, (12)

whose cumulative distribution function is H(qS|S) = 1/2 + (1/2) erf(√(qS/2)). Then, we can define the following criterion [100, 101]:

pS = [1 − erf(√(qS/2))] / [1 − erf(√(q0/2))], (13)

which measures how far the assumed signal is from the null result S = 0. We set pS = 0.05 to get the 95% confidence level (C.L.) upper limit, Slim. The results for Slim as functions of frequency are shown in Fig. 2, where the datasets from the three observation periods are shown in different colors. Note that there are high peaks at some frequencies (e.g., 40 MHz) in the plot; solar bursts at these frequencies may induce them, which weakens the upper limits.

Figure 2. Model-independent 95% C.L. upper limits Slim [W/m²/Hz] from LOFAR data on a constant mono-chromatic signal. The blue, orange, and green limits represent the observation data on April 25, 2015, July 3, 2015, and September 3, 2015.

We calculate the 95% C.L. upper limits on ϵ for the DPDM model and on gaγγ for the axion DM model by requiring Slim to equal Ssig in (8) for each dataset. Among the constraints from the three datasets, we choose the strongest one at each frequency bin as the final upper limit. The limit from LOFAR data for DPDM is plotted in Fig. 3, which shows that the upper limit on ϵ is about 10⁻¹³ in the frequency range 30-80 MHz. It is about one order of magnitude better than the existing CMB constraint [17, 49], and is complementary to other searches for DPDM at higher frequencies, such as the Dark E-field experiment [102].

Figure 3. 95% C.L. upper limit on the kinetic mixing ϵ for dark photon DM from 17 minutes of LOFAR observation data. We also show the existing constraints from the CMB distortion [17, 49] and the haloscope searches WISPDMX [103] and Dark E-field [102].

The upper limit for DPDM can be readily translated into an upper limit for the axion DM model using the relation in (6). To be conservative, we take |BT| = 1 Gauss, the lower value given in Ref. [91]; the resulting constraint on gaγγ is plotted as the solid line in Fig. 4. For comparison, we also plot the result for |BT| = 4 Gauss, the upper value of the solar corona magnetic field in Ref. [91], as the dashed line.

Figure 4. 95% C.L. upper limit on the axion-photon coupling gaγγ from 17 minutes of LOFAR observation data. The solid (dashed) blue lines are for |BT| = 1 (4) Gauss. We also show the existing constraints from various experiments and astrophysical observations (the data are from the website [57]), including the Light-Shining-through-a-Wall experiments CROWS [104], ALPS [105], and OSQAR [106]; the helioscope CAST [107]; the haloscope ADMX SLIC [108]; and the astrophysical bounds from magnetic white dwarf polarization (MWDP) [71], Globular Clusters [80, 81], pulsars [64], as well as quasars and blazars (QBs, shown in dashed gray) [76-78].

The large uncertainty of the magnetic field overshadows other statistical and systematic uncertainties. Therefore, the zig-zag features shown in Fig. 3 become less meaningful in the axion DM case. As a result, in Fig. 4, we average the upper limits over every 20 frequency bins to indicate the sensitivity of the LOFAR data to the axion DM model. We see that our result exceeds the existing constraints from the Light-Shining-through-a-Wall experiments CROWS [104], ALPS [105], and OSQAR [106], but is not as competitive as direct search experiments such as CAST [107] or ADMX SLIC [108] (in very narrow bands), or the astrophysical bounds from observations of magnetic white dwarf polarization [71], Globular Clusters [80, 81], pulsars [64], and quasars and blazars [76-78].
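As an illustration of that translation, a hedged sketch in natural units, assuming ma = mA′ and 1 Gauss ≈ 1.95 × 10⁻²⁰ GeV² (the sample numbers below are illustrative, not the paper's exact limit values):

```python
from math import sqrt

# Translate a DPDM kinetic-mixing limit into an axion-photon coupling limit
# via relation (6): sqrt(2/3) eps m^2 <-> g_agg |B_T| m, with m_a = m_A'.
GAUSS_IN_GEV2 = 1.95e-20          # 1 Gauss in natural units (GeV^2), approx.

def g_agg_limit(eps_limit, m_eV, B_gauss=1.0):
    """Returns the corresponding g_agg limit in GeV^-1."""
    m_GeV = m_eV * 1.0e-9
    return sqrt(2.0 / 3.0) * eps_limit * m_GeV / (B_gauss * GAUSS_IN_GEV2)

# e.g. eps ~ 1e-13 at m ~ 2.5e-7 eV (around 60 MHz) gives g_agg ~ 1e-9 GeV^-1
g1 = g_agg_limit(1.0e-13, 2.5e-7, B_gauss=1.0)
g4 = g_agg_limit(1.0e-13, 2.5e-7, B_gauss=4.0)  # stronger field, stronger limit
```

This makes the |BT| dependence explicit: the factor-of-4 spread between the solid and dashed curves in Fig. 4 comes directly from the 1-4 Gauss range of the coronal field.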
SUMMARY AND OUTLOOK

When DPDM or axion DM passes across the Sun, it can resonantly convert into EM waves in the solar corona. We numerically simulated the converted photons propagating in the plasma, including the effects of absorption and scattering. Radio telescopes for solar observations can detect the mono-chromatic converted EM waves. We used three datasets of 17-minute observations from LOFAR to search for such signals. We found that this method sets a stringent limit on the kinetic mixing of the dark photon, ϵ ∼ 10⁻¹³, in the frequency range 30-80 MHz, which is about one order of magnitude stronger than the CMB constraint. Similarly, we obtained an upper limit on the axion-photon coupling gaγγ for the axion DM model in the same frequency range. The constraint on gaγγ is better than that from the Light-Shining-through-a-Wall experiments.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' However, it does not exceed the bounds from the CAST experiment, the haloscope-type experiment ADMX SLIC and other astro- physical bounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' The LOFAR data analysis in this work shows great potential in searching for ultralight DM with radio telescopes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' With greater sensitivity, we expect fu- ture radio programs such as the SKA telescope will have better sensitivity on DPDM and axion DM search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Ter- restrial radio telescopes cannot search for DPDM with a frequency smaller than 10 MHz due to the screening effect from the ionosphere.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' In this case, we may use solar probes, such as the STEREO [109] satellite and Parker Solar Probe [110], carrying radio spectrometers, to search for them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Acknowledgment– The authors would like to thank Eduard Kontar for the helpful discussions and espe- cially the interpretation of the data format and cali- brations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' The work of HA is supported in part by the National Key R&D Program of China under Grant No.' 
2021YFC2203100 and 2017YFA0402204, the NSFC under Grant No. 11975134, and the Tsinghua University Initiative Scientific Research Program. The work of SG is supported by the International Postdoctoral Exchange Fellowship Program and the Boya Postdoctoral Fellowship of Peking University. The work of JL is supported by the NSFC under Grants No. 12075005 and No. 12235001, and by Peking University under startup Grant No. 7101502458.

Appendix: The simulation of the scattering and absorption effects

In this appendix, we derive the effective spectral flux density, Eq. (8), received by LOFAR stations.

Firstly, the Field of View (FOV) of LOFAR, or effectively, the Full Width at Half Maximum (FWHM) of LOFAR, is determined by

FWHM = α × λ/D, (S1)

where λ is the observation wavelength, the coefficient α = 1.02 [111], and D ≃ 3.5 km is the station diameter according to Ref. [112]. Therefore, the FWHM (for one beam) is about 10⁻³ rad.

We can effectively define the last scattering sphere of radius R_S, beyond which the scattering effect can be ignored, so the radio waves propagate in straight lines for r > R_S. The total radiation power of the dark photon signal at frequency f after conversion is dP/dΩ × 4π, so the surviving power at the last scattering sphere is

P = P_sur(f) × 4π × dP/dΩ. (S2)

Figure S1. Schematic diagram of the propagation of photons after the last scattering.
R_C denotes the conversion layer; R_S denotes the last scattering sphere. A surface element dA1 containing a point P1 is the radiation source on the last scattering sphere. A surface element dA2 containing P2 is the detection area on the Earth, which defines a solid angle dΩ1 about P1 in the direction of r. θ_i is the angle between the propagation vector r and the normal vector N_i of dA_i. We set the direction of N2 along the line between the centers of the Sun and the Earth.

Considering a virtual source point P1 within a surface element dA1 on the last scattering sphere (see the schematic diagram in Fig. S1), the power it radiates in the direction r is

dP′ = P × dA1/(4πR_S²) × g(θ1, φ1) dΩ1, (S3)

where the angular distribution function g(θ1, φ1) accounts for the fact that, after multiple random scattering events, the radiation from the surface element is not simply in the radial direction.
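Before moving on, the beam width in Eq. (S1) admits a quick numerical check. The sketch below evaluates FWHM = αλ/D with α = 1.02 and D ≃ 3.5 km taken from the text; the sample frequencies are merely illustrative values spanning the LOFAR low band.

```python
C = 299_792_458.0  # speed of light in m/s

def fwhm_rad(freq_hz: float, alpha: float = 1.02, diameter_m: float = 3500.0) -> float:
    """Beam FWHM from Eq. (S1): FWHM = alpha * lambda / D."""
    wavelength_m = C / freq_hz
    return alpha * wavelength_m / diameter_m

# Illustrative frequencies spanning the LOFAR LBA band.
for f_mhz in (10, 50, 90):
    print(f"{f_mhz:3d} MHz -> FWHM = {fwhm_rad(f_mhz * 1e6):.2e} rad")
```

At 50 MHz this gives roughly 1.7 × 10⁻³ rad, consistent with the FWHM ∼ 10⁻³ rad quoted above.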
The function g(θ1, φ1) is normalized as

1 = ∫ g(θ1, φ1) dΩ1. (S4)

The relation dΩ1 = dA2 cos θ2/r² is useful, where the cosine accounts for converting the receiving area dA2 to the projected area normal to r. Then, Eq. (S3) becomes

dP′ = P × dA1/(4πR_S²) × g(θ1, φ1) × dA2 cos θ2/r², (S5)

where r is the distance from the surface element to the Earth. Meanwhile, θ2 is of order 10⁻³ rad, so that cos θ2 ≃ 1. Substituting Eq. (S2) into Eq. (S5) and integrating over the area on the last scattering sphere covered by the beams, we arrive at the effective spectral flux density (power per unit area and unit frequency) received by LOFAR:

S_sig = P_sur × (1/B) × (1/R_S²) × dP/dΩ × ∫_beam [g(θ1, φ1)/r²] dA1. (S6)

As discussed in the main text, the angular distribution function g(θ1, φ1) can be obtained by numerical simulations.
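Whatever profile the simulation produces, it must satisfy the normalization condition (S4), which is easy to verify numerically. The sketch below uses a hypothetical forward-peaked Gaussian profile with azimuthal symmetry (the width σ is illustrative, not a simulation output) and normalizes it so that ∫ g dΩ = 1:

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule, avoiding NumPy version differences."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def make_normalized_g(sigma: float, n: int = 20_000):
    """Return g(theta1) ∝ exp(-theta1^2 / (2 sigma^2)), normalized per Eq. (S4):
    with azimuthal symmetry, ∫ g dΩ = 2π ∫ g(θ) sinθ dθ = 1."""
    theta = np.linspace(0.0, np.pi, n)
    unnorm = np.exp(-theta**2 / (2.0 * sigma**2))
    norm = 2.0 * np.pi * trapezoid(unnorm * np.sin(theta), theta)
    return lambda th: np.exp(-th**2 / (2.0 * sigma**2)) / norm

g = make_normalized_g(sigma=0.1)
theta = np.linspace(0.0, np.pi, 20_000)
total = 2.0 * np.pi * trapezoid(g(theta) * np.sin(theta), theta)
print(f"integral of g over the sphere = {total:.6f}")  # ≈ 1
```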
The integration is performed in spherical coordinates (θ, φ) with the solar center as the origin. Therefore, it can be transformed into

S_sig = P_sur × (1/d²) × (1/B) × dP/dΩ × ∫_beam g(θ1, φ1) sin θ1 cos θ2 dθ1 dφ1, (S7)

where d = 1 AU is the distance from the Earth to the Sun, and cos θ2 = √(1 − R_S² sin²θ1/d²) is the geometric relation. The R_S dependence in cos θ2 is canceled out by the implicit R_S dependence in g(θ1, φ1, R_S). For the simplest case with no scattering, g(θ1, φ1) = δ(θ1)/(2π sin θ1), and Eq. (S7) becomes S_sig = P_sur · (1/d²) · (1/B) · dP/dΩ, as expected. It is worth noting that, as the data is averaged over the beams with flux larger than 50% of the maximum beam flux, the spherical surface integral is over the area covered by these selected beams and then divided by the number of selected beams.
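The structure of Eq. (S7) can be illustrated with a direct quadrature. In the sketch below, both the last-scattering radius R_S and the angular profile g are hypothetical placeholders (the real g comes from the numerical simulations mentioned above); it evaluates the beam integral I = ∫ g sin θ1 cos θ2 dθ1 dφ1 and checks that I ≈ 1 when the beam captures the full angular spread, recovering the no-scattering result S_sig = P_sur · (1/d²) · (1/B) · dP/dΩ:

```python
import numpy as np

AU = 1.495978707e11   # Earth-Sun distance d, in meters
R_S = 2.0 * 6.957e8   # illustrative last-scattering radius (2 solar radii; an assumption)

def trapezoid(y, x):
    """Simple trapezoidal rule, avoiding NumPy version differences."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def normalized_g(sigma: float, n: int = 200_000):
    """Hypothetical forward-peaked g(theta1), normalized so that ∫ g dΩ = 1."""
    theta = np.linspace(0.0, np.pi, n)
    norm = 2.0 * np.pi * trapezoid(np.exp(-theta**2 / (2 * sigma**2)) * np.sin(theta), theta)
    return lambda th: np.exp(-th**2 / (2 * sigma**2)) / norm

def beam_integral(g, theta_max: float, n: int = 200_000) -> float:
    """I = ∫ g(theta1) sin(theta1) cos(theta2) dtheta1 dphi1 over theta1 < theta_max,
    with cos(theta2) = sqrt(1 - R_S^2 sin^2(theta1) / d^2) as in the text;
    azimuthal symmetry turns the phi1 integral into a factor 2*pi."""
    theta = np.linspace(0.0, theta_max, n)
    cos_theta2 = np.sqrt(1.0 - (R_S * np.sin(theta) / AU) ** 2)
    return 2.0 * np.pi * trapezoid(g(theta) * np.sin(theta) * cos_theta2, theta)

for sigma in (0.3, 0.1, 0.01):
    print(f"sigma = {sigma:5.2f} -> I = {beam_integral(normalized_g(sigma), np.pi / 2):.4f}")
```

Because R_S/d ∼ 10⁻² here, cos θ2 stays within about 10⁻⁴ of unity, so the printed values sit very close to 1 whenever the beam captures essentially all of g, as the no-scattering limit requires.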
∗ anhp@mail.tsinghua.edu.cn
† Xingyao.Chen@glasgow.ac.uk
‡ sge@pku.edu.cn
§ jialiu@pku.edu.cn
¶ ly23@stu.pku.edu.cn
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='06243 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [27] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Nakayama, “Constraint on Vector Coherent Oscillation Dark Matter with Kinetic Function,” JCAP 08 (2020) 033, arXiv:2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='10036 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [28] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Ema, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Nakayama, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Tang, “Production of Purely Gravitational Dark Matter: The Case of Fermion and Vector Boson,” JHEP 07 (2019) 060, arXiv:1903.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='10973 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [29] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Kolb and A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Long, “Completely dark photons from gravitational particle production during the inflationary era,” JHEP 03 (2021) 283, arXiv:2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='03828 [astro-ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='CO].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [30] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Salehian, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Gorji, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Firouzjahi, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Mukohyama, “Vector dark matter production from inflation with symmetry breaking,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' D 103 no.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 6, (2021) 063526, arXiv:2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='04491 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [31] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Ahmed, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Grzadkowski, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Socha, “Gravitational production of vector dark matter,” JHEP 08 (2020) 059, arXiv:2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='01766 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [32] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Nakai, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Namba, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Wang, “Light Dark Photon Dark Matter from Inflation,” JHEP 12 (2020) 170, arXiv:2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='10743 [hep-ph].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [33] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Nakayama and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Tang, “Gravitational Production of Hidden Photon Dark Matter in Light of the XENON1T Excess,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' B 811 (2020) 135977, arXiv:2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='13159 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [34] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Firouzjahi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Gorji, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Mukohyama, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Salehian, “Dark photon dark matter from charged inflaton,” JHEP 06 (2021) 050, arXiv:2011.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='06324 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [35] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Bastero-Gil, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Santiago, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Ubaldi, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Vega-Morales, “Dark photon dark matter from a rolling inflaton,” JCAP 02 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 02, (2022) 015, arXiv:2103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='12145 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [36] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Firouzjahi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Gorji, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Mukohyama, and A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Talebian, “Dark matter from entropy perturbations in curved field space,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' D 105 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 4, (2022) 043501, arXiv:2110.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='09538 [gr-qc].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [37] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Sato, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Takahashi, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Yamada, “Gravitational production of dark photon dark matter with mass generated by the Higgs mechanism,” arXiv:2204.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='11896 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [38] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Co, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Pierce, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Zhang, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Zhao, “Dark Photon Dark Matter Produced by Axion Oscillations,” arXiv:1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='07196 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [39] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Dror, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Harigaya, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Narayan, “Parametric Resonance Production of Ultralight Vector Dark Matter,” arXiv:1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='07195 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [40] M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Bastero-Gil, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Santiago, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Ubaldi, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Vega-Morales, “Vector dark matter production at the end of inflation,” arXiv:1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='07208 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [41] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Agrawal, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Kitajima, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Reece, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Sekiguchi, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Takahashi, “Relic Abundance of Dark Photon Dark Matter,” arXiv:1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='07188 [hep-ph].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [42] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Co, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Harigaya, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Pierce, “Gravitational waves and dark photon dark matter from axion rotations,” JHEP 12 (2021) 099, arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='02077 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [43] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Nakayama and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Yin, “Hidden photon and axion dark matter from symmetry breaking,” JHEP 10 (2021) 026, arXiv:2105.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='14549 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [44] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Long and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='-T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Wang, “Dark Photon Dark Matter from a Network of Cosmic Strings,” arXiv:1901.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='03312 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [45] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Sikivie, “Experimental Tests of the Invisible Axion,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 51 (1983) 1415–1417.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 4 [Erratum: Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 52, 695 (1984)].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [46] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Sikivie, “Detection Rates for ’Invisible’ Axion Searches,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' D 32 (1985) 2988.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [Erratum: Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='D 36, 974 (1987)].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [47] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Okun, “LIMITS OF ELECTRODYNAMICS: PARAPHOTONS?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=',” Sov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' JETP 56 (1982) 502.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [Zh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Eksp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Teor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Fiz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='83,892(1982)].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [48] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Van Bibber, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Dagdeviren, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Koonin, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Kerman, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Nelson, “Proposed experiment to produce and detect light pseudoscalars,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 59 (1987) 759–762.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [49] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' McDermott and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Witte, “Cosmological evolution of light dark photon dark matter,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' D101 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 6, (2020) 063030, arXiv:1911.' 
…05086 [hep-ph].

[50] Fermi-LAT Collaboration, M. Ajello et al., "Search for Spectral Irregularities due to Photon–Axionlike-Particle Oscillations with the Fermi Large Area Telescope," Phys. Rev. Lett. 116 no. 16, (2016) 161101, arXiv:1603.06978 [astro-ph.HE].

[51] M. Meyer and T. Petrushevska, "Search for Axionlike-Particle-Induced Prompt γ-Ray Emission from Extragalactic Core-Collapse Supernovae with the Fermi Large Area Telescope," Phys. Rev. Lett. 124 no. 23, (2020) 231101, arXiv:2006.06722 [astro-ph.HE]. [Erratum: Phys. Rev. Lett. 125, 119901 (2020)].

[52] H. An, M. Pospelov, and J. Pradler, "New stellar constraints on dark photons," Phys. Lett. B 725 (2013) 190–195, arXiv:1302.3884 [hep-ph].

[53] H. An, M. Pospelov, J. Pradler, and A. Ritz, "New limits on dark photons from solar emission and keV scale dark matter," Phys. Rev. D 102 (2020) 115022, arXiv:2006.13929 [hep-ph].

[54] H. An, M. Pospelov, J. Pradler, and A. Ritz, "Direct Detection Constraints on Dark Photon Dark Matter," Phys. Lett. B 747 (2015) 331–338, arXiv:1412.8378 [hep-ph].

[55] H. An, M. Pospelov, and J. Pradler, "Dark Matter Detectors as Dark Photon Helioscopes," Phys. Rev. Lett. 111 (2013) 041302, arXiv:1304.3461 [hep-ph].

[56] A. Caputo, A. J. Millar, C. A. J. O'Hare, and E. Vitagliano, "Dark photon limits: A handbook," Phys. Rev. D 104 no. 9, (2021) 095029, arXiv:2105.04565 [hep-ph].

[57] C. O'Hare, "cajohare/axionlimits: Axionlimits." https://cajohare.github.io/AxionLimits/, July, 2020.

[58] M. S. Pshirkov and S. B. Popov, "Conversion of Dark matter axions to photons in magnetospheres of neutron stars," J. Exp. Theor. Phys. 108 (2009) 384–388, arXiv:0711.1264 [astro-ph].

[59] F. P. Huang, K. Kadota, T. Sekiguchi, and H. Tashiro, "Radio telescope search for the resonant conversion of cold dark matter axions from the magnetized astrophysical sources," Phys. Rev. D 97 no. 12, (2018) 123001, arXiv:1803.08230 [hep-ph].

[60] A. Hook, Y. Kahn, B. R. Safdi, and Z. Sun, "Radio Signals from Axion Dark Matter Conversion in Neutron Star Magnetospheres," Phys. Rev. Lett. 121 no. 24, (2018) 241102, arXiv:1804.03145 [hep-ph].

[61] B. R. Safdi, Z. Sun, and A. Y. Chen, "Detecting Axion Dark Matter with Radio Lines from Neutron Star Populations," Phys. Rev. D 99 no. 12, (2019) 123021, arXiv:1811.01020 [astro-ph.CO].

[62] J.-F. Fortin and K. Sinha, "X-Ray Polarization Signals from Magnetars with Axion-Like-Particles," JHEP 01 (2019) 163, arXiv:1807.10773 [hep-ph].

[63] J.-F. Fortin and K. Sinha, "Constraining Axion-Like-Particles with Hard X-ray Emission from Magnetars," JHEP 06 (2018) 048, arXiv:1804.01992 [hep-ph].

[64] D. Noordhuis, A. Prabhu, S. J. Witte, A. Y. Chen, F. Cruz, and C. Weniger, "Novel Constraints on Axions Produced in Pulsar Polar Cap Cascades," arXiv:2209.09917 [hep-ph].

[65] D. K. Hong, C. S. Shin, and S. Yun, "Cooling of young neutron stars and dark gauge bosons," Phys. Rev. D 103 no. 12, (2021) 123031, arXiv:2012.05427 [hep-ph].

[66] M. D. Diamond and G. Marques-Tavares, "γ-Ray Flashes from Dark Photons in Neutron Star Mergers," Phys. Rev. Lett. 128 no. 21, (2022) 211101, arXiv:2106.03879 [hep-ph].

[67] B.-Q. Lu and C.-W. Chiang, "Probing dark gauge boson with observations from neutron stars," Phys. Rev. D 105 no. 12, (2022) 123017, arXiv:2107.07692 [hep-ph].

[68] E. Hardy and N. Song, "Listening for Dark Photon Radio from the Galactic Centre," arXiv:2212.09756 [hep-ph].

[69] J.-W. Wang, X.-J. Bi, R.-M. Yao, and P.-F. Yin, "Exploring axion dark matter through radio signals from magnetic white dwarf stars," Phys. Rev. D 103 no. 11, (2021) 115021, arXiv:2101.02585 [hep-ph].

[70] C. Dessert, A. J. Long, and B. R. Safdi, "X-ray Signatures of Axion Conversion in Magnetic White Dwarf Stars," Phys. Rev. Lett. 123 no. 6, (2019) 061104, arXiv:1903.05088 [hep-ph].

[71] C. Dessert, D. Dunsky, and B. R. Safdi, "Upper limit on the axion-photon coupling from magnetic white dwarf polarization," Phys. Rev. D 105 no. 10, (2022) 103034, arXiv:2203.04319 [hep-ph].

[72] J. Jaeckel, P. C. Malta, and J. Redondo, "Decay photons from the axionlike particles burst of type II supernovae," Phys. Rev. D 98 no. 5, (2018) 055032, arXiv:1702.02964 [hep-ph].

[73] A. Caputo, G. Raffelt, and E. Vitagliano, "Muonic boson limits: Supernova redux," Phys. Rev. D 105 no. 3, (2022) 035022, arXiv:2109.03244 [hep-ph].

[74] A. De Angelis, G. Galanti, and M. Roncadelli, "Relevance of axion-like particles for very-high-energy astrophysics," Phys. Rev. D 84 (2011) 105030, arXiv:1106.1132 [astro-ph.HE]. [Erratum: Phys. Rev. D 87, 109903 (2013)].

[75] J. Guo, H.-J. Li, X.-J. Bi, S.-J. Lin, and P.-F. Yin, "Implications of axion-like particles from the Fermi-LAT and H.E.S.S. observations of PG 1553+113 and PKS 2155−304," Chin. Phys. C 45 no. 2, (2021) 025105, arXiv:2002.07571 [astro-ph.HE].

[76] H.-J. Li, J.-G. Guo, X.-J. Bi, S.-J. Lin, and P.-F. Yin, "Limits on axion-like particles from Mrk 421 with 4.5-year period observations by ARGO-YBJ and Fermi-LAT," Phys. Rev. D 103 no. 8, (2021) 083003, arXiv:2008.09464 [astro-ph.HE].

[77] H.-J. Li, X.-J. Bi, and P.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='-F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Yin, “Searching for axion-like particles with the blazar observations of MAGIC and Fermi-LAT *,” Chin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' C 46 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 8, 5 (2022) 085105, arXiv:2110.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='13636 [astro-ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='HE].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [78] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Davies, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Meyer, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Cotter, “Constraints on axionlike particles from a combined analysis of three flaring Fermi flat-spectrum radio quasars,” arXiv:2211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='03414 [astro-ph.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='HE].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [79] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Redondo and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Raffelt, “Solar constraints on hidden photons re-visited,” JCAP 1308 (2013) 034, arXiv:1305.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='2920 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [80] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Ayala, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Dom´ınguez, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Giannotti, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Mirizzi, and O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Straniero, “Revisiting the bound on axion-photon coupling from Globular Clusters,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Rev.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 113 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 19, (2014) 191302, arXiv:1406.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='6053 [astro-ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='SR].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [81] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Dolan, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Hiskens, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Volkas, “Advancing globular cluster constraints on the axion-photon coupling,” JCAP 10 (2022) 096, arXiv:2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='03102 [hep-ph].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [82] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Vinyoles, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Serenelli, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Villante, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Basu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Redondo, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Isern, “New axion and hidden photon constraints from a solar data global fit,” JCAP 2015 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 10, (Oct.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=', 2015) 015–015, arXiv:1501.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='01639 [astro-ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='SR].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [83] W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' DeRocco, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Wegsman, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Grefenstette, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Huang, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Van Tilburg, “First Indirect Detection Constraints on Axions in the Solar Basin,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 129 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 10, (2022) 101101, arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='05700 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [84] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' An, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Huang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Liu, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Xue, “Radio-frequency Dark Photon Dark Matter across the Sun,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 126 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 18, (2021) 181102, arXiv:2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='15836 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [85] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' van Haarlem et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=', “LOFAR: The LOw-Frequency ARray,” Astron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Astrophys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 556 (2013) A2, arXiv:1305.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='3550 [astro-ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='IM].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [86] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Raffelt and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Stodolsky, “Mixing of the Photon with Low Mass Particles,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' D37 (1988) 1237.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [87] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Dewdney, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Hall, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Schilizzi, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Lazio, “The square kilometre array,” Proceedings of the IEEE 97 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 8, (2009) 1482–1496.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [88] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' De La Luz, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Lara, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Mendoza, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Shimojo, “3D Simulations of the Quiet Sun Radio Emission at Millimeter and Submillimeter Wavelengths,” Geofisica Internacional 47 (July, 2008) 197–203.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [89] S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Solanki, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Inhester, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Sch¨ussler, “The solar magnetic field,” Reports on Progress in Physics 69 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 3, (2006) 563.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [90] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Aschwanden, “Chapter 11 - The Sun,” in Encyclopedia of the Solar System (Third Edition), T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Spohn, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Breuer, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Johnson, eds.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=', pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 235–259.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Elsevier, Boston, third edition ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=', 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='sciencedirect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='com/science/article/ pii/B9780124158450000116.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [91] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Yang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Bethge, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Tian, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Tomczyk, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Morton, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Del Zanna, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' McIntosh, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Karak, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Gibson, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Samanta, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=', “Global maps of the magnetic field in the solar corona,” Science 369 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 6504, (2020) 694–697.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [92] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' de Salas, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Malhan, K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Freese, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Hattori, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Valluri, “On the estimation of the Local Dark Matter Density using the rotation curve of the Milky Way,” JCAP 10 (2019) 037, arXiv:1906.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='06133 [astro-ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='GA].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [93] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' de Salas and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Widmark, “Dark matter local density determination: recent observations and future prospects,” Rept.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Prog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 84 no.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 10, (2021) 104901, arXiv:2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='11477 [astro-ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content='GA].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' [94] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' McMillan and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Binney, “The uncertainty in Galactic parameters,” Mon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Not.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Roy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Astron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE2T4oBgHgl3EQfDQZP/content/2301.03622v1.pdf'} +page_content=' 402 (2010) 934, arXiv:0907.' 
diff --git a/X9AyT4oBgHgl3EQfiPgb/content/tmp_files/2301.00390v1.pdf.txt b/X9AyT4oBgHgl3EQfiPgb/content/tmp_files/2301.00390v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..de51c62dbb4fe40c44f59c6c214682e55f5f9709
--- /dev/null
+++ b/X9AyT4oBgHgl3EQfiPgb/content/tmp_files/2301.00390v1.pdf.txt
@@ -0,0 +1,1649 @@

Prepared for submission to JHEP

Impact of CP violation searches at MOMENT experiment with sterile neutrinos

Kiran Sharma and Sudhanwa Patra

Department of Physics, Indian Institute of Technology Bhilai, Raipur-492015, India

E-mail: kirans@iitbhilai.ac.in, sudhanwa@iitbhilai.ac.in

Abstract: We examine the scope of the MOMENT experiment for CP-violation searches in the presence of an extra eV-scale sterile neutrino. MOMENT is a proposed short-baseline neutrino oscillation experiment that uses muon decays to produce the neutrino beam, which avoids the π0 background and other technical difficulties of pion-decay beams. We work at the first oscillation maximum, which matches the peak of the flux, with a run time of 5 years in each of the neutrino and antineutrino modes. We perform bi-probability studies in both the 3 and 3+1 flavor mixing schemes. The CP-violation sensitivities arising from the fundamental CP phase δ13 and the unknown CP phase δ14 are explored on a firm footing. A slight deterioration of the CP-violation sensitivity induced by δ13 is observed once the sterile neutrino is included. We also study the reconstruction of the CP phases δ13 and δ14 and find that MOMENT has significant capability for the precise measurement of δ13.
+Keywords: Neutrino Oscillation, Medium-baseline, Sterile Neutrino, MOMENT +arXiv:2301.00390v1 [hep-ph] 1 Jan 2023 + +Contents +1 +Introduction and Motivation +1 +2 +Transition probability in the 4-flavor scheme +3 +2.1 +Theoretical framework +3 +2.2 +Appearance probability P 4ν +µe +4 +3 +Discussion at probability level +6 +4 +Biprobability analysis +7 +5 +Experimental Details and Event Spectra +10 +6 +CP violation sensitivities in presence of sterile neutrino +12 +6.1 +Chi-square analysis +12 +6.2 +Reconstructing the CP phases +13 +7 +Conclusions and Outlook +15 +8 +APPENDIX +15 +A Detailed Description of Biprobability analysis. +15 +1 +Introduction and Motivation +Neutrino physics has made tremendous progress in the past few years. The discovery of +the phenomenon of neutrino oscillations [1–8] has not only answered the solar- neutrino +problem [9–12] and atmospheric-neutrino problem [13–16], but also it has open up the new +doors to look at the beyond standard model physics. At present, as per the standard model, +there are three active flavors of neutrinos associated with the electron, muon and, tau lepton. +The neutrino oscillations in the presence of three active neutrinos are described by six +oscillation parameters: two mass square differences i.e. ∆m2 +21 = m2 +2−m2 +1, ∆m2 +31 = m2 +3−m2 +1, +three neutrino mixing angles θ12, θ13, θ23 and one Dirac CP phase δCP . The values of the +oscillations parameters have been constrained by various neutrino experiments [17–24], the +only unknown oscillation parameters left to be known precisely are the sign of atmosphere +mass -squared difference i.e. ∆m2 +31 which will resolve the issue of neutrino mass ordering, +the precise measurement of mixing angle θ23 which will help in establishing the correct +octant and the value of fundamental CP phase. The long baseline experiments T2K [25] +and NOνA [26] have independently reported the value of δCP but there is a slight tension +between the two values. 
This discrepancy may be because of the systematic uncertainties or +– 1 – + +it may be an indication of the new physics arising from the presence of sterile neutrinos [27– +35] or non-standard interactions [36–40]. +Certain short baseline anomalies are coming from the reactor and Gallium [41, 42], +accelerator experiments LSND [43] and MiniBooNE [44] which hints towards the existence +of a hypothetical fourth sterile mass eigenstate. The presence of such right-handed sterile +neutrinos will extend the structure of the standard 3 flavor framework. As a result, the +number of neutrino oscillation parameters gets enhanced in the minimal extended 3 + 1 +framework. There are now three additional mixing angles i.e. θ14, θ24, θ34 and two more +CP phases. We have the new mass square differences arising from the mixing among fourth +mass eigen state ν4 with mass eigenstates of three active flavors (νe, νµ and, ντ). The +presence of sterile neutrino has been widely explored in the literature (see references [45– +51]). +The effect of sterile neutrinos over the standard oscillation parameters have been +widely explored in the current and future proposed neutrino oscillation experiments like +T2HK [52],T2HKK [53], and DUNE [54–57]. An important point to remark is that all the +above-stated experimental facilities are using the neutrino beams originating from the pions +decay, whereas in the proposed work, we look at the potential of upcoming neutrino oscil- +lation experiment MOMENT(MuOn-decay MEdium baseline NeuTrino beam facility) [58] +which observe a neutrino beam produced from decaying muons at relatively low energies. +As an advantage one can skip the background effects and other technical difficulties at the +experimental level. +MOMENT is a proposed short baseline neutrino oscillation experiment with a baseline +of 150 km. A detailed description of the experimental facility is provided in section 5. 
The matter effects arising from the interaction of neutrinos with the matter potential as they propagate are negligible for MOMENT in comparison with long-baseline experiments. As a result, the fake CP-violation scenarios induced by matter terms can be avoided at MOMENT. A comparative analysis of the precision on the CP phase among MOMENT, T2K, NOνA, T2HK, T2HKK, and DUNE has shown that MOMENT can significantly improve the bounds on $\delta_{CP}$ [59]. Furthermore, the effects of non-standard interactions on the CP phase have been examined in references [59-62]. To the best of our knowledge, the effect of sterile neutrinos at the MOMENT facility has not yet been studied in much detail. In this work we examine the influence of a sterile neutrino on the precise measurement of the fundamental CP phase. Our aim is to assess the capabilities of the MOMENT experiment and to put it into context within the global experimental effort in neutrino physics.

This motivates us to study the physics potential of the MOMENT experiment in the presence of a hypothetical right-handed eV-scale sterile neutrino. We present a detailed discussion of the behavior of the 4-flavor transition probabilities, take a bird's-eye view of the CP trajectory curves in the standard 3-flavor scheme, and extend them to the 3+1 scheme. We then perform a prospective study of the CP-violation sensitivities and of the degeneracies among the different CP phases.

The paper is organized as follows. In the next section we develop the theoretical description of the neutrino and antineutrino oscillation probabilities in the presence of a sterile neutrino. A detailed biprobability study is presented in section 4. The experimental details and the corresponding event spectra are explored in section 5.
We focus on the CP-violation study and on the reconstruction of the different CP phases in section 6. The paper concludes with a summary of the prospective studies performed in this work.

2  Transition probability in the 4-flavor scheme

2.1  Theoretical framework

The formalism of 3+1 neutrino oscillation can be understood through the time-dependent Schrödinger equation in the mass basis,

$$i\,\frac{\partial \nu_j}{\partial t} = H_0\,\nu_j, \qquad j = 1, 2, 3, 4, \tag{2.1}$$

where $H_0$ is the effective Hamiltonian in the mass basis and $\nu_j$ is a neutrino mass eigenstate. The effective flavor-basis Hamiltonian for 3+1 oscillations, including matter effects, is

$$H_{4\nu} = \underbrace{U_{4\nu}\,\mathrm{diag}\!\left(0,\ \frac{\Delta m^2_{21}}{2E},\ \frac{\Delta m^2_{31}}{2E},\ \frac{\Delta m^2_{41}}{2E}\right) U^\dagger_{4\nu}}_{H_{\rm vac}} \;+\; \underbrace{\mathrm{diag}\!\left(V_{CC},\ 0,\ 0,\ -V_{NC}\right)}_{H_{\rm mat}}. \tag{2.2}$$

In the 3+1 scenario, i.e. three active neutrinos and one sterile neutrino, the mixing matrix $U_{4\nu}$ can be parameterised as

$$U_{4\nu} = R(\theta_{34},\delta_{34})\,R(\theta_{24},0)\,R(\theta_{14},\delta_{14})\,R(\theta_{23},0)\,R(\theta_{13},\delta_{13})\,R(\theta_{12},0) \equiv R(\theta_{34},\delta_{34})\,R(\theta_{24},0)\,R(\theta_{14},\delta_{14})\,U_{3\nu}, \tag{2.3}$$

where $U_{3\nu} = R(\theta_{23},0)\,R(\theta_{13},\delta_{13})\,R(\theta_{12},0)$ is the standard three-flavor neutrino mixing matrix. The mass eigenstates are related to the flavor eigenstates by $\nu_j = (U_{4\nu})_{j\alpha}\,\nu_\alpha$.

The $4\times 4$ real rotation matrices $R_{ij}$ (here $R(\theta_{24})$, $R(\theta_{23})$, $R(\theta_{12})$) act in the $(i,j)$ plane through the $2\times 2$ submatrix

$$R_{ij} = \begin{pmatrix} c_{ij} & s_{ij} \\ -s_{ij} & c_{ij} \end{pmatrix}, \tag{2.4}$$

while the complex rotation matrices ($R(\theta_{34},\delta_{34})$, $R(\theta_{14},\delta_{14})$, and $R(\theta_{13},\delta_{13})$) in the $(i,j)$ plane are defined by

$$\tilde R_{ij} = \begin{pmatrix} c_{ij} & s_{ij}\,e^{-i\delta_{ij}} \\ -s_{ij}\,e^{i\delta_{ij}} & c_{ij} \end{pmatrix}, \tag{2.5}$$

with $c_{ij} = \cos\theta_{ij}$ and $s_{ij} = \sin\theta_{ij}$. This parametrization of the unitary mixing matrix renders the conversion probability independent of the mixing angle $\theta_{34}$ and of the corresponding phase $\delta_{34}$.
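As a cross-check of this parametrization, the mixing matrix can be assembled numerically from the rotation matrices. The sketch below is ours (the helper names are hypothetical, and the placement of the phase factor follows one common sign convention); it builds $U_{4\nu}$ with the Table 1 angles and verifies its unitarity.

```python
import numpy as np

def rotation(i, j, theta, delta=0.0, dim=4):
    """Rotation in the (i, j) plane of a dim x dim identity matrix.
    For delta != 0 this is the complex rotation R(theta_ij, delta_ij);
    the placement of the phase factor is one common convention (an
    assumption here, not fixed by the text)."""
    R = np.eye(dim, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = R[j, j] = c
    R[i, j] = s * np.exp(-1j * delta)
    R[j, i] = -s * np.exp(1j * delta)
    return R

def U4nu(th12, th13, th23, th14, th24, th34, d13, d14, d34):
    """U_4nu = R(34) R(24) R(14) R(23) R(13) R(12), as in eq. (2.3).
    Indices are 0-based, so the (3,4) plane becomes (2,3), etc."""
    return (rotation(2, 3, th34, d34) @ rotation(1, 3, th24)
            @ rotation(0, 3, th14, d14) @ rotation(1, 2, th23)
            @ rotation(0, 2, th13, d13) @ rotation(0, 1, th12))

# Benchmark angles from Table 1 (sin^2 values converted to angles).
angles = dict(th12=np.arcsin(np.sqrt(0.318)), th13=np.arcsin(np.sqrt(0.022)),
              th23=np.arcsin(np.sqrt(0.574)), th14=np.arcsin(np.sqrt(0.02)),
              th24=np.arcsin(np.sqrt(0.02)), th34=0.0)
U = U4nu(**angles, d13=-np.pi / 2, d14=np.pi / 2, d34=0.0)
print(np.allclose(U @ U.conj().T, np.eye(4)))  # prints True: U is unitary
```

Since each factor is unitary, the product is unitary for any choice of angles and phases, which is what the final check confirms.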
In the S-matrix formalism, the neutrino flavor state after propagating a distance $L$ is given in terms of an evolution matrix $S$ as

$$\nu_\alpha(L) = S_{\alpha\beta}\,\nu_\beta(0). \tag{2.6}$$

The key point is that the evolution matrix $S$ satisfies the same Schrödinger equation. After some simplification, $S$ can be expressed in terms of $H_{4\nu}$ as

$$S_{\beta\alpha} = \left[\exp(-i H_{4\nu} L)\right]_{\beta\alpha}. \tag{2.7}$$

The oscillation probability from $\nu_\alpha$ to $\nu_\beta$ for neutrino energy $E$ and baseline $L$ is then

$$P(\nu_\alpha \to \nu_\beta) = \left| S_{\beta\alpha} \right|^2. \tag{2.8}$$

2.2  Appearance probability $P^{4\nu}_{\mu e}$

The 3+1 appearance probability for the $\nu_\mu \to \nu_e$ transition has been derived in ref. [63]; it is related to the three-flavor transition probability $P^{3\nu}_{\mu e}$ plus interference terms involving the sterile mixing parameters:

$$P^{4\nu}_{\mu e} \approx \left(1 - s^2_{14} - s^2_{24}\right) P^{3\nu}_{\mu e} + 4 s_{14} s_{24} s_{13} s_{23} \sin\Delta_{31}\,\sin(\Delta_{31} + \delta_{13} - \delta_{14}) - 4 s_{14} s_{24} c_{23} s_{12} c_{12} \sin\Delta_{21}\,\sin\delta_{14} + 2 s^2_{14} s^2_{24}, \tag{2.9}$$

with $\Delta_{ij} = \Delta m^2_{ij} L / 4E$. The expression for $P^{3\nu}_{\mu e}$ is the sum of three contributions: atmospheric $P^{\rm ATM}$, solar $P^{\rm SOL}$, and their interference $P^{\rm INT}_{I}$. The recent global-fit values of the oscillation parameters displayed in Table 1 give

$$s_{23} = 0.76,\ c_{23} = 0.65, \qquad s_{12} = 0.56,\ c_{12} = 0.83, \qquad s_{13} = 0.15,\ c_{13} = 0.99. \tag{2.10}$$

The sine of the reactor mixing angle, $s_{13}$, is treated as a small parameter relative to the other mixing angles and is taken to be of order $\mathcal{O}(\varepsilon) \approx 0.15$, while all other sines and cosines are $\mathcal{O}(1)$. The ratio of the two mass-squared differences, $|\alpha| = \Delta m^2_{21}/|\Delta m^2_{31}| \approx 0.03 \simeq \varepsilon^2$, is likewise small. The sines of the sterile mixing angles, $\sin\theta_{14}$ and $\sin\theta_{24}$, are also considered small, of order $\varepsilon$.
However, the remaining sterile mixing angle $\theta_{34}$ and the corresponding phase $\delta_{34}$ are taken to be zero, since the vacuum probability expression is independent of these contributions.

Parameter                          True value          Marginalization range
sin^2 θ12                          0.318               Not marginalized
sin^2 θ13                          0.022               Not marginalized
sin^2 θ23                          0.574               [0.38, 0.64]
sin^2 θ14                          0.02                Not marginalized
sin^2 θ24                          0.02                Not marginalized
sin^2 θ34                          0                   Not marginalized
δ13                                [-180°, 180°]       [-180°, 180°]
δ14                                [-180°, 180°]       Not marginalized
δ34                                0                   Not marginalized
Δm^2_21 (eV^2)                     7.50 × 10^-5        Not marginalized
Δm^2_31 (eV^2)                     2.55 × 10^-3        [2.4, 2.7] × 10^-3
Δm^2_41 (eV^2)                     1                   Not marginalized

Table 1. The oscillation parameters and the true values used to simulate the data in GLoBES are listed in columns 1 and 2. The third column gives the ranges over which $\sin^2\theta_{23}$, $\delta_{13}$, and $\Delta m^2_{31}$ are varied while minimizing the $\chi^2$ to obtain the final results.

Retaining terms up to third order in $\varepsilon$, the appearance probability $P^{4\nu}_{\mu e}$ simplifies to the sum of three contributions,

$$P^{4\nu}_{\mu e} \simeq P^{\rm ATM} + P^{\rm INT}_{I} + P^{\rm INT}_{II}, \tag{2.11}$$

with the individual contributions given by

$$P^{\rm ATM} \simeq 4 s^2_{23} s^2_{13} \sin^2\Delta, \tag{2.12}$$

$$P^{\rm INT}_{I} \simeq 8 s_{13} s_{12} c_{12} s_{23} c_{23}\,(\alpha\Delta)\,\sin\Delta\,\cos(\Delta + \delta_{13}), \tag{2.13}$$

$$P^{\rm INT}_{II} \simeq 4 s_{14} s_{24} s_{13} s_{23}\,\sin\Delta\,\sin(\Delta + \delta_{13} - \delta_{14}), \tag{2.14}$$

where $\Delta \equiv \Delta_{31}$. The first term, driven by the atmospheric frequency $\Delta_{31}$, is positive and provides the leading-order contribution to the transition probability. The subleading contributions arise from the interference of different frequencies: $P^{\rm INT}_{I}$ corresponds to the interference of the solar and atmospheric frequencies, while $P^{\rm INT}_{II}$ comes from the interference of the atmospheric and sterile frequencies.
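To make the $\varepsilon$ power counting concrete, eqs. (2.12)-(2.14) can be evaluated at the first oscillation maximum with the benchmark sines of eq. (2.10). The short sketch below is ours (the function name is hypothetical):

```python
import numpy as np

# Benchmark values from eq. (2.10) and Table 1.
s13, s12, c12, s23, c23 = 0.15, 0.56, 0.83, 0.76, 0.65
s14 = s24 = np.sqrt(0.02)
alpha = 0.03  # Delta m^2_21 / |Delta m^2_31|

def terms(Delta, d13, d14):
    """Leading and interference terms of eqs. (2.12)-(2.14)."""
    P_atm = 4 * s23**2 * s13**2 * np.sin(Delta)**2
    P_int1 = (8 * s13 * s12 * c12 * s23 * c23 * (alpha * Delta)
              * np.sin(Delta) * np.cos(Delta + d13))
    P_int2 = (4 * s14 * s24 * s13 * s23 * np.sin(Delta)
              * np.sin(Delta + d13 - d14))
    return P_atm, P_int1, P_int2

# First oscillation maximum: Delta = pi/2, phases set to zero for illustration.
P_atm, P1, P2 = terms(np.pi / 2, d13=0.0, d14=0.0)
print(P_atm, P1, P2)  # P_atm ~ 0.052 dominates; P1 vanishes at this phase choice
```

The leading term comes out at about 0.052, roughly an order of magnitude above the maximal interference amplitudes, consistent with the $\mathcal{O}(\varepsilon^2)$ versus $\mathcal{O}(\varepsilon^3)$ counting.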
One can infer from the expressions for $P^{\rm INT}_{I}$ and $P^{\rm INT}_{II}$ that the interference induced by the presence of the sterile neutrino is not proportional to $\Delta$, while the solar-atmospheric interference term is directly proportional to it. As a result, the numerical simulation carried out for the MOMENT experiment, which operates at the first oscillation maximum, performs well.

3  Discussion at the probability level

The impact of matter effects on the appearance probabilities is marginal, since MOMENT is a medium-baseline experiment with a baseline of 150 km. The vacuum probability expressions can therefore be used effectively, neglecting the suppressed contributions from the MSW effect. For illustration, consider the transition probability in the presence of matter effects up to leading order [63-66],

$$P^{\rm ATM}_{m} \simeq (1 + 2v)\,P^{\rm ATM}, \tag{3.1}$$

with the correction factor

$$v = \frac{V}{k} \equiv \frac{2VE}{\Delta m^2_{31}}, \qquad V = \sqrt{2}\,G_F N_e. \tag{3.2}$$

For the MOMENT experiment, with its 150 km baseline and a first-maximum peak energy $E \approx 0.3$ GeV, the correction factor is estimated to be $v = 0.048$. For the NOνA experiment, taking the benchmark peak energy $E = 2$ GeV, one finds $v \sim 0.17$, which is why matter effects are important in long-baseline studies.

We examine the variation of the appearance probabilities $P(\nu_\mu \to \nu_e)$ and $P(\bar\nu_\mu \to \bar\nu_e)$ as functions of energy for fixed values of the fundamental CP phase. The value of $\delta_{13}$ is kept fixed at 0° and -90°, while the phase $\delta_{14}$ is varied over the values listed in the legend of fig. 1. The remaining oscillation parameters are kept fixed at the benchmark values of Table 1. The simulations are performed with a constant matter density of 2.7 g/cc, taken from the Preliminary Reference Earth Model (PREM) [67].
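The size of the matter correction of eq. (3.2) can be estimated directly. The sketch below uses the standard conversion $\sqrt{2} G_F N_e \approx 7.63\times 10^{-14}\,\mathrm{eV} \times (Y_e\,\rho / \mathrm{g\,cm^{-3}})$; the densities, electron fraction, and peak energies are assumed benchmarks, and depending on the conventions adopted the MOMENT value can differ by a factor of order two from the number quoted in the text.

```python
def matter_v(E_GeV, rho_gcc, Ye=0.5, dm31_eV2=2.55e-3):
    """Correction factor v = 2 V E / Delta m^2_31 of eq. (3.2).
    Uses sqrt(2) G_F N_e = 7.63e-14 eV * (Ye * rho / (g/cm^3)), a
    standard value; inputs below are assumed benchmarks."""
    V_eV = 7.63e-14 * Ye * rho_gcc
    return 2 * V_eV * (E_GeV * 1e9) / dm31_eV2

v_moment = matter_v(E_GeV=0.3, rho_gcc=2.7)  # MOMENT first-maximum benchmark
v_nova = matter_v(E_GeV=2.0, rho_gcc=2.8)    # NOvA-like benchmark
print(v_moment, v_nova)  # percent-level vs ~0.17
```

The NOνA-like factor comes out near 0.17, while the MOMENT factor is at the percent level, confirming that matter effects are subleading for the shorter baseline and lower energy.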
We take the sterile mass-squared difference to be $\Delta m^2_{41} = 1\,\mathrm{eV}^2$; as a result, the oscillations induced by the sterile state are averaged out by the finite energy resolution of the detector, and we study the averaged-out behavior of the appearance probabilities. Recall that the antineutrino appearance probability is obtained from that of neutrinos by flipping the sign of the matter potential and of the fundamental and sterile CP phases, i.e.

$$P_{\bar\nu_\alpha \to \bar\nu_\beta}(V, \delta_{13}, \delta_{14}) = P_{\nu_\alpha \to \nu_\beta}(-V, -\delta_{13}, -\delta_{14}). \tag{3.3}$$

As is evident from eq. (3.1), the leading-order contribution to the appearance probability is larger for neutrinos ($V > 0$) than for antineutrinos ($V < 0$). In fig. 1 the solid black line shows the 3-flavor appearance probability, while the colored lines correspond to the different 3+1 mixing scenarios: blue for $\delta_{14} = -90°$, orange for $\delta_{14} = 0°$, green for $\delta_{14} = 90°$, and magenta for $\delta_{14} = 180°$. The left column shows the neutrino appearance probability, the right column the antineutrino one. The curves peak around the first oscillation maximum of the MOMENT experiment, $E \approx 0.3$ GeV. One can see that the amplitude and shape of the probability depend strongly on the value of the sterile CP phase $\delta_{14}$. The amplitude decreases for the antineutrino oscillation probability, as expected from eq. (3.1), which will be seen again in the event spectra of section 5. Moreover, the plot shows the mutual swapping of the curves in passing from the neutrino to the antineutrino probability.
[Figure 1 appears here: four panels of $P(\nu_\mu\to\nu_e)$ and $P(\bar\nu_\mu\to\bar\nu_e)$ versus $E$ (0.1-0.8 GeV) for $\delta_{13} = 0°$ and $\delta_{13} = -90°$.]

Figure 1. Transition probabilities for neutrinos and antineutrinos, after averaging over the fast oscillations, plotted against energy (varying from 0.1 to 0.8 GeV) with the matter density kept fixed at 2.7 g/cc. The 3-flavor oscillation probability is shown by the black curve, while the colored curves give the probability in the 3+1 mixing scheme for four values of $\delta_{14}$: -90°, 0°, 90°, and 180°. The left column shows the neutrino transition probabilities for two values of $\delta_{13}$; the right column shows the antineutrino ones.

4  Biprobability analysis

The biprobability curves, as the name suggests, are parametric curves that trace out the space spanned jointly by the neutrino and antineutrino oscillation probabilities. The idea of biprobability curves in the 3-flavor scheme was first introduced in ref. [68]. The oscillation probabilities in the 3-flavor scheme are

$$P = P_0 + A(\cos\Delta\cos\delta_{13} - \sin\Delta\sin\delta_{13}), \tag{4.1}$$

$$\bar P = \bar P_0 + \bar A(\cos\Delta\cos\delta_{13} + \sin\Delta\sin\delta_{13}), \tag{4.2}$$

where $A = 8 s_{13} s_{12} c_{12} s_{23} c_{23}\,(\alpha\Delta_{31})\sin\Delta_{31}$ and $P_0 \approx (1 + 2v)\,P^{\rm ATM}$. The biprobability curves thus display, in a single diagram, the effects of the fundamental CP phase (the CP-violating $\sin\delta_{13}$ term and the CP-conserving $\cos\delta_{13}$ term) together with the matter effects entering through $A$. Biprobability plots therefore provide a bird's-eye view useful for mass-hierarchy and CP-violation studies.
Since MOMENT is a medium-baseline facility, it is not worthwhile to perform a mass-hierarchy sensitivity analysis. In this work we fix the normal hierarchy and study the CP trajectory diagrams for the neutrino and antineutrino oscillation probabilities. The fundamental CP phase $\delta_{13}$ is varied in the range $-\pi$ to $\pi$ and the biprobability space is spanned. Since the probabilities involve the periodic sine and cosine functions, the spanned space forms a closed trajectory as $\delta_{13}$ is varied. We generalize the biprobability representation to the 3+1 scheme by keeping $\delta_{14}$ fixed at various values while $\delta_{13}$ is varied over its entire range.

Under the adiabatic approximation, equations 4.1 and 4.2 can be re-expressed as

$$P = l\cos\delta_{13} + m\sin\delta_{13} + n, \tag{4.3}$$

$$\bar P = l\cos\delta_{13} - m\sin\delta_{13} + n. \tag{4.4}$$

A deeper understanding of the 3-flavor CP-trajectory curves is obtained by eliminating $\delta_{13}$ from these equations. Under the assumption $A = \bar A$, which holds in vacuum, the neutrino and antineutrino probabilities satisfy

$$\left(\frac{P + \bar P - 2n}{2l}\right)^2 + \left(\frac{P - \bar P}{2m}\right)^2 = 1. \tag{4.5}$$

This is the equation of an ellipse whose major and minor axes measure the coefficients of $\sin\delta_{13}$ and $\cos\delta_{13}$, respectively, in the oscillation probability. Moreover, the major (or minor) axis is always inclined at an angle of 45°, as visible in the biprobability plots.

The idea of biprobability plots in the 3-flavor scheme can be extended to the 3+1 analysis, where additional sources of CP violation arise from the sterile neutrino, as explored in [69] and the references therein.
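The closure of the trajectory and the ellipse relation (4.5) can be verified numerically by tracing eqs. (4.3) and (4.4); in the sketch below the values of $l$, $m$, $n$ are placeholders of ours, not fitted coefficients.

```python
import numpy as np

# Placeholder coefficients: l, m are the cos/sin delta13 coefficients
# and n the phase-independent offset (arbitrary illustrative values).
l, m, n = 0.006, -0.013, 0.052

delta13 = np.linspace(-np.pi, np.pi, 361)
P = l * np.cos(delta13) + m * np.sin(delta13) + n     # eq. (4.3)
Pbar = l * np.cos(delta13) - m * np.sin(delta13) + n  # eq. (4.4)

# Eq. (4.5): the traced curve satisfies the ellipse equation identically.
lhs = ((P + Pbar - 2 * n) / (2 * l))**2 + ((P - Pbar) / (2 * m))**2
print(np.allclose(lhs, 1.0))  # prints True
```

Because $P + \bar P - 2n = 2l\cos\delta_{13}$ and $P - \bar P = 2m\sin\delta_{13}$, the left-hand side collapses to $\cos^2\delta_{13} + \sin^2\delta_{13} = 1$ for every point on the curve, and the trajectory closes as $\delta_{13}$ completes a full period.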
The analytic expression for the neutrino transition probability in the 3+1 case can be recast in the compact form

$$P(\nu) \equiv P = P_0 + A\cos(\Delta + \delta_{13}) + B\sin(\Delta - \delta_{14} + \delta_{13}), \tag{4.6}$$

where the first term $P_0 = P^{\rm ATM} \simeq 4 s^2_{23} s^2_{13}\sin^2\Delta$ is independent of the phases, and the phase-independent coefficients of the second and third terms are $A = 8 s_{13} s_{12} c_{12} s_{23} c_{23}\,(\alpha\Delta)\sin\Delta$ and $B = 4 s_{14} s_{24} s_{13} s_{23}\sin\Delta$. After a few trigonometric simplifications, the transition probability of eq. (4.6) becomes

$$P = P_0 + A'\cos\delta_{13} + B'\sin\delta_{13}. \tag{4.7}$$

Similarly, the simplified antineutrino transition probability is

$$\bar P = \bar P_0 + \bar A'\cos\delta_{13} - \bar B'\sin\delta_{13}, \tag{4.8}$$

where the coefficients $A'$, $B'$, $\bar A'$, and $\bar B'$ are defined as

$$A' = A\cos\Delta + B\sin(\Delta - \delta_{14}), \tag{4.9}$$

$$B' = -A\sin\Delta + B\cos(\Delta - \delta_{14}), \tag{4.10}$$

$$\bar A' = A\cos\Delta + B\sin(\Delta + \delta_{14}), \tag{4.11}$$

$$\bar B' = -A\sin\Delta + B\cos(\Delta + \delta_{14}). \tag{4.12}$$

The detailed derivation of the CP trajectory, obtained by eliminating $\delta_{13}$, is given in appendix A. Comparing the trajectory equation with the general equation of an ellipse, the angle of inclination is

$$\tan 2\theta = \frac{(B^2 - A^2)\cos 2\Delta - 2AB\sin 2\Delta\cos\delta_{14}}{2AB\sin\delta_{14}}. \tag{4.13}$$

It is clear from this expression that the orientation of the ellipse depends strongly on the value of the sterile CP phase $\delta_{14}$ and on the parameter $\Delta$. The following conclusions can be drawn, depending on the values of these parameters:

1. First, when $\sin\delta_{14}$ vanishes (i.e. $\delta_{14} = n\pi$), the inclination becomes $\theta = \pi/4$. The orientation of the elliptical trajectory is determined by the sign of the numerator: counter-clockwise if the numerator is positive, clockwise if it is negative.

2.
If $\Delta \approx n\pi/2$, which holds for the MOMENT experiment operating at the first oscillation maximum, the inclination angle simplifies to

$$\tan 2\theta \approx \frac{0.366}{\sin\delta_{14}}. \tag{4.14}$$

There are then two possible inclinations for $\delta_{14} \to (2n-1)\pi/2$, depending on the value of $n$ (where $n = 1, 2, 3, \ldots$): for a positive denominator the minor axis is inclined by $\approx 10°$, whereas for a negative denominator the major axis is inclined by $\approx -10°$.

This discussion is readily confirmed by the biprobability plots in fig. 2, which show the dependence of the ellipse inclination for four different values of $\delta_{14}$. The parameter $\delta_{13}$ is varied over the full allowed range $[-\pi, \pi]$, while $\delta_{14}$ is kept fixed in each panel, as indicated in the legend. The solid black ellipse traces the variation of the neutrino and antineutrino probabilities in the 3-flavor scheme. The centers of the ellipses in the 3+1 scheme almost coincide with those of the 3-flavor scheme; the slight shifts arise from the negligible matter effects. Special symbols mark four values of the fundamental CP phase, $\delta_{13} = -180°, -90°, 0°, 90°$. For $\delta_{14} = -180°$ and $\delta_{14} = 0°$ the inclination is 45°, as seen in the orange and magenta curves; the swapping of the marked $\delta_{13}$ values indicates the direction in which the elliptical trajectory is traced, as expected. The blue and green curves, drawn for $\delta_{14} = -90°$ and $90°$, are inclined by $-10°$ and $10°$, respectively.
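Both the recasting of eqs. (4.6)-(4.12) and the numerical coefficient in eq. (4.14) can be checked with the benchmark values of eq. (2.10). The verification sketch below is ours (variable names are hypothetical):

```python
import numpy as np

s13, s12, c12, s23, c23 = 0.15, 0.56, 0.83, 0.76, 0.65
s14 = s24 = np.sqrt(0.02)
alpha = 0.03
D = np.pi / 2  # first oscillation maximum, Delta = pi/2

A = 8 * s13 * s12 * c12 * s23 * c23 * (alpha * D) * np.sin(D)
B = 4 * s14 * s24 * s13 * s23 * np.sin(D)

# Check the recast identity (4.6) -> (4.7) using (4.9)-(4.10).
rng = np.random.default_rng(0)
for d13, d14 in rng.uniform(-np.pi, np.pi, size=(20, 2)):
    lhs = A * np.cos(D + d13) + B * np.sin(D - d14 + d13)
    Ap = A * np.cos(D) + B * np.sin(D - d14)    # eq. (4.9)
    Bp = -A * np.sin(D) + B * np.cos(D - d14)   # eq. (4.10)
    assert np.isclose(lhs, Ap * np.cos(d13) + Bp * np.sin(d13))

# At Delta = pi/2, cos(2 Delta) = -1 and sin(2 Delta) = 0, so eq. (4.13)
# reduces to tan(2 theta) = coef / sin(delta14) with
coef = (A**2 - B**2) / (2 * A * B)
print(coef)  # ~0.36, matching the 0.366 of eq. (4.14)
```

The trigonometric identity behind eqs. (4.9)-(4.10) holds exactly for random phases, and the numerical coefficient comes out close to the quoted 0.366 (small differences reflect the rounded benchmark sines).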
[Figure 2 appears here: four biprobability panels, $P(\bar\nu_\mu\to\bar\nu_e)$ versus $P(\nu_\mu\to\nu_e)$, for $\delta_{14} = 0°$, $180°$, $-90°$, and $90°$.]

Figure 2. Biprobability plots under the normal hierarchy for both the 3-flavor and 3+1 mixing schemes. The 3-flavor CP trajectory is drawn in black, while the orange, magenta, blue, and green closed curves correspond to the fixed values of $\delta_{14}$ indicated in each legend. The neutrino energy is kept fixed at its first-oscillation-peak value of 0.3 GeV. The CP phase $\delta_{13}$ is varied in the range $[-\pi, \pi]$. Four values of $\delta_{13}$ (0°, 180°, -90°, and 90°) are marked by different symbols to allow a comparison of the orientations of the ellipses.

5  Experimental Details and Event Spectra

In this section we describe the specifications of the MOMENT experimental setup. As the name suggests, MOMENT is a muon-decay medium-baseline neutrino beam facility using a continuous proton beam with a power of 15 MW. Since it is very difficult for any solid target to withstand such a high power, mercury jets are adopted as the target. Neutrinos are produced from the muons of the pion decay chain: $\mu^+$ decays via $\mu^+ \to \bar\nu_\mu \nu_e e^+$, while $\mu^-$ decays via $\mu^- \to \nu_\mu \bar\nu_e e^-$. We thus have four neutrino flavors, $\nu_e$, $\nu_\mu$, $\bar\nu_e$, and $\bar\nu_\mu$, and the MOMENT setup allows the study of eight oscillation channels ($\nu_e \to \nu_e$, $\nu_e \to \nu_\mu$, $\nu_\mu \to \nu_e$, $\nu_\mu \to \nu_\mu$, as well as their corresponding CP-conjugate partners).
To provide sensitivity to CP violation, the distinction between neutrinos and antineutrinos is achieved with a 500 kiloton fiducial Gd-doped water-Cherenkov detector as the baseline detector. Neutron capture on Gd provides a way to distinguish neutrinos from antineutrinos through the known interactions of neutrinos with nucleons: $\nu_e + n \to p + e^-$, $\nu_\mu + n \to p + \mu^-$, $\bar\nu_e + p \to n + e^+$, $\bar\nu_\mu + p \to n + \mu^+$.

Characteristics                 MOMENT
Beam power                      15 MW
Fiducial detector mass          500 kton Gd-doped water Cherenkov
Baseline                        150 km
Flux peaks at                   0.3 GeV
First oscillation maximum       ≈ 0.3 GeV

Table 2. The experimental specifications of the MOMENT experiment.

The number of protons on target (POT) for a proton beam power of 15 MW and a proton energy of 1.5 GeV can be calculated as

$$x = \frac{W \times y \times t \times 10^{16}}{1.6 \times E_p}, \tag{5.1}$$

where $W$ is the beam power and $E_p$ is the proton energy. The operational time in one year, $t$, is roughly $10^7$ seconds, while $y$ is the number of years of proton delivery on the target. For the MOMENT experiment the POT will be $6.2 \times 10^{22}$. The neutrino and antineutrino fluxes produced by the 1.5 GeV beam peak around 0.30 GeV, while the total energy range considered is 0.010-0.80 GeV. The simulations assume equal running in neutrino and antineutrino mode. We use a 2.5% uncertainty on the signal and a 5% uncertainty on the background for both modes.

[Figure 3 appears here: $\nu_e$ and $\bar\nu_e$ appearance event spectra versus reconstructed energy.]

Figure 3. The expected number of signal events plotted against the reconstructed neutrino energy.
The black curve refers to the 3-flavor case, while the colored histograms are for the 3+1 scheme with the different values of $\delta_{14}$ given in the legend. The left panel corresponds to $\nu_e$ appearance events, the right one to $\bar\nu_e$ appearance events.

From the existing literature, the number of events in the $i$-th energy bin is given by

$$N_i = \frac{T\,n_n\,\epsilon}{4\pi L^2} \int_0^{E_{\max}} dE \int_{E^{\min}_{A_i}}^{E^{\max}_{A_i}} dE_A\;\phi(E)\,\sigma_{\nu_e}(E)\,R(E, E_A)\,P_{\mu e}(E), \tag{5.2}$$

where $T$ is the total running time, $n_n$ is the number of target nucleons in the detector, $\epsilon$ is the detector efficiency, $\phi(E)$ is the neutrino flux, $\sigma_{\nu_e}$ is the neutrino interaction cross section, and $R(E, E_A)$ is the Gaussian energy-resolution function of the detector. The quantities $E$ and $E_A$ are the true and reconstructed (anti-)neutrino energies, respectively, and $L$ is the baseline.

The numerical results are obtained with the GLoBES software [70, 71] and the additional toolkit required to incorporate the new physics arising from the sterile neutrino. We plot the number of events against the reconstructed energy. The flux is maximal around 0.3 GeV, so the maximum number of events falls in that energy bin. In the plot, the thick black line depicts the 3-flavor scenario, while the colored histograms show the 3+1 scenario for different values of $\delta_{14}$. The number of $\nu_e$ appearance events is almost double the number of $\bar\nu_e$ events, since the neutrino interaction cross section differs substantially from the antineutrino one.

6  CP violation sensitivities in the presence of a sterile neutrino

6.1  Chi-square analysis

The sensitivity of an experiment to the precise values of the oscillation parameters is determined by a $\chi^2$ analysis, performed by comparing the true event rates simulated from the present best-fit data with the events generated under a test hypothesis.
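The structure of the binned event rate of eq. (5.2) can be illustrated with a toy numerical integration. The flux, cross section, oscillation probability, normalisation, and resolution below are crude placeholders of ours, not the GLoBES inputs.

```python
import numpy as np

def toy_events(bin_edges, sigma_E=0.08, n_grid=400):
    """Toy version of eq. (5.2): an overall normalisation times
    int dE int dE_A  phi(E) sigma(E) R(E, E_A) P(E).
    Every ingredient is a placeholder for illustration only."""
    E = np.linspace(0.05, 0.8, n_grid)                 # true energy grid [GeV]
    dE = E[1] - E[0]
    phi = np.exp(-0.5 * ((E - 0.3) / 0.1) ** 2)        # toy flux peaked at 0.3 GeV
    sigma = E                                          # toy cross section ~ E
    prob = 0.05 * np.sin(1.27 * 2.55e-3 * 150 / E) ** 2  # toy nu_mu -> nu_e prob.
    norm = 1e4                                         # lumps T n_n eps / (4 pi L^2)
    counts = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        EA = np.linspace(lo, hi, 41)                   # reconstructed-energy grid
        dEA = EA[1] - EA[0]
        # Gaussian resolution R(E, E_A), integrated over the bin in E_A
        R = np.exp(-0.5 * ((E[None, :] - EA[:, None]) / sigma_E) ** 2) / (
            np.sqrt(2 * np.pi) * sigma_E)
        inner = R.sum(axis=0) * dEA                    # int dE_A R(E, E_A)
        counts.append(norm * np.sum(phi * sigma * prob * inner) * dE)
    return np.array(counts)

bins = np.linspace(0.1, 0.6, 11)
events = toy_events(bins)
print(events.argmax())  # the most populated bin sits near the 0.3 GeV flux peak
```

Even with these placeholder inputs, the spectrum peaks in the bins around the flux maximum, mirroring the qualitative behavior of the event plots above.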
The theoretical uncertainties and the systematic errors at the experimental level are incorporated through the method of pulls in the $\chi^2$ [72, 73],

$$\chi^2(p^{\rm true}, p^{\rm test}) = \min_{\xi} \left[ \sum_{i \in \rm bins} \frac{\left(N^{\rm true}_i - N^{\rm test}_i(\xi)\right)^2}{N^{\rm true}_i} + \frac{\xi^2}{\sigma^2_\xi} \right], \tag{6.1}$$

where $\xi$ denotes the nuisance parameter and $\sigma_\xi$ the corresponding systematic error. The terms involving the nuisance parameters are called pull terms; the penalty term $\xi^2/\sigma^2_\xi$ counteracts the effect of the systematic errors. The nuisance parameters depend on the fiducial mass of the detector used in a particular experiment, as well as on other experimental properties such as the flux normalization and the cross section. The $\chi^2$ is minimized by marginalizing over the oscillation-parameter space; to this end one adds further penalty terms, called priors, to the $\chi^2$:

$$\chi^2_{\rm prior} = \left( \frac{N^{\rm true} - N^{\rm test}}{\sigma^{\rm true}} \right)^2. \tag{6.2}$$

In our analysis we studied the sensitivity of the MOMENT experiment to the precise value of the CP phase in the presence of a sterile neutrino. In the three-flavor scenario, as seen in eq. (4.1), CP violation is induced by the $\sin\delta_{13}$ term. The $\chi^2$ analysis gives the statistical significance at which the no-CP-violation test hypothesis can be rejected:

$$\chi^2 = \frac{\left(N(\delta^{\rm tr}_{CP}) - N(\delta^{\rm test}_{CP} = 0°, 180°)\right)^2}{N(\delta^{\rm tr}_{CP})}. \tag{6.3}$$

We fix the mixing angles $\theta_{12}$, $\theta_{13}$, and $\theta_{14}$ to their best-fit values from Table 1 in both the true and the test spectra. We marginalize over $\theta_{23}$ and the mass-squared difference $\Delta m^2_{31}$ in two ways:

1. First case: $\Delta m^2_{31}$ is kept fixed in both the 3-flavor and 3+1 schemes, while $\theta_{23}$ is marginalized over the range given in Table 1.

2. Second case: both $\Delta m^2_{31}$ and $\theta_{23}$ are marginalized in the 3-flavor and 3+1 schemes.
In addition, the projected information on $\Delta m^2_{31}$ is added in the form of a prior,

$$\chi^2_{\rm prior} = \left( \frac{\Delta m^2_{31}(\rm true) - \Delta m^2_{31}(\rm test)}{\sigma(\Delta m^2_{31})} \right)^2, \tag{6.4}$$

where $\sigma(\Delta m^2_{31})$ is the $1\sigma$ error on $\Delta m^2_{31}$.

The numerical results are displayed in figure 4. The left panel corresponds to fixed $\Delta m^2_{31}$ with $\theta_{23}$ varied, while the right panel corresponds to the second case, marginalizing over both $\Delta m^2_{31}$ and $\theta_{23}$. The solid black line represents the estimated $\Delta\chi^2$ for the three-flavor scenario, while the colored lines depict the analysis in the 3+1 scenario. In the $\chi^2$ analysis of the discovery potential of the CP phases we assume equal neutrino and antineutrino running over (5+5) years. In each case we consider four different true values of the $\delta_{14}$ phase, as in the probability analysis, while its test value is marginalized. The figure clearly shows that the CP sensitivity decreases in the presence of a sterile neutrino, primarily because of the degeneracies between the fundamental and active-sterile CP phases.

[Figure 4 appears here: two panels of $\Delta\chi^2$ versus $\delta_{13}$(true), with the $7\sigma$ level indicated.]

Figure 4. The potential of the MOMENT experiment for the discovery of CP violation induced by the fundamental phase $\delta_{13}$. The black curve shows the behavior in the 3-flavor scheme, while the colored curves in each panel correspond to different values of $\delta_{14}$. In the left panel $\Delta m^2_{31}$ is kept fixed for both the 3-flavor and 3+1 mixing schemes while we marginalize over $\theta_{23}$; in the right panel we marginalize over both $\Delta m^2_{31}$ and $\theta_{23}$.

6.2  Reconstructing the CP phases

In the last subsection we examined the CP-violation discovery potential of the MOMENT experiment through a $\chi^2$ analysis.
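As a minimal illustration of the pull method of eq. (6.1), the sketch below minimizes the $\chi^2$ over a single signal-normalization nuisance parameter on a grid; all numbers are toy inputs of ours, not the MOMENT spectra.

```python
import numpy as np

def chi2_pull(n_true, n_pred, sigma_xi=0.025):
    """Eq. (6.1) with one nuisance parameter xi rescaling the prediction:
    N_test_i(xi) = (1 + sigma_xi * xi) * n_pred_i, minimized over a xi grid.
    sigma_xi = 2.5% mimics the signal-normalization uncertainty."""
    xi = np.linspace(-5, 5, 2001)
    n_test = (1 + sigma_xi * xi[:, None]) * n_pred[None, :]
    chi2 = ((n_true - n_test) ** 2 / n_true).sum(axis=1) + xi ** 2
    return chi2.min()

n_true = np.array([55.0, 40.0, 22.0, 9.0])    # toy observed spectrum
assert chi2_pull(n_true, n_true) < 1e-12      # perfect agreement at xi ~ 0
# A small normalisation shift is largely absorbed by the pull term:
print(chi2_pull(n_true, 1.02 * n_true))
```

The penalty $\xi^2$ keeps the fit from absorbing arbitrarily large shifts: a 2% mismatch costs only a fraction of a unit of $\chi^2$, while a much larger shift cannot be pulled away cheaply.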
There is, however, another way to extract complementary information: reconstructing the values of the two CP phases $\delta_{13}$ and $\delta_{14}$, independently of the amount of CP violation. We study contour plots reconstructing the CP phases in the $\delta_{13}$-$\delta_{14}$ plane. The test values of both CP phases are varied over $(-\pi, \pi)$, and contours are shown simultaneously at the one-, two-, and three-sigma levels. The four panels of figure 5 show the regions reconstructed for four benchmark values: the solid black dots in the upper row represent the CP-conserving scenarios $(0, 0)$ and $(\pi, \pi)$, while the bottom row shows the CP-violating cases $(-\pi/2, -\pi/2)$ and $(\pi/2, \pi/2)$. Our results predict remarkable sensitivity for determining the precise value of the CP phase $\delta_{13}$ in the presence of a sterile neutrino.

[Figure 5 appears here: four panels of contours in the $\delta_{13}$(test)-$\delta_{14}$(test) plane for the benchmark points $(0,0)$, $(\pi,\pi)$, $(-\pi/2,-\pi/2)$, and $(\pi/2,\pi/2)$.]

Figure 5. Contour plots in the plane of $\delta_{13}$(test) and $\delta_{14}$(test) for four different true values of the CP phases $\delta_{13}$ and $\delta_{14}$. The discovery potential of $\delta_{13}$ is shown by the blue, red, and green bands at the $1\sigma$, $2\sigma$, and $3\sigma$ confidence levels, respectively.
The left (right) panel of the first row corresponds to $(0, 0)$ ($(\pi, \pi)$), while the left (right) panel of the second row corresponds to $(-\pi/2, -\pi/2)$ ($(\pi/2, \pi/2)$).

7  Conclusions and Outlook

In this work we have addressed the potential of the MOMENT experiment, emphasizing the impact of a sterile neutrino on the standard three-flavor oscillation parameters. We have shown the transition probabilities for both neutrinos and antineutrinos in the 3+1 scheme and examined the space spanned by the CP trajectory curves. We have explored the performance of MOMENT in probing the CP-violation sensitivities induced by the fundamental CP phase $\delta_{13}$ and by the new CP phase arising from active-sterile mixing. We find that the loss of CP sensitivity depends on the value of the $\delta_{14}$ phase. The discovery potential of CP violation in the 3-flavor scheme is quite significant, at the $7\sigma$ level, though it is reduced in the presence of the unknown CP phase $\delta_{14}$; the reduction arises from the degeneracies between the two CP phases. We have also assessed the capability of the MOMENT experiment to reconstruct the true values of these CP phases.

Acknowledgments

Kiran Sharma would like to thank the Ministry of Education for the financial support for carrying out this research work. KS is very thankful to Dr. Sabya Sachi Chatterjee for the fruitful discussions, carried out from time to time, for the betterment of this work.

8  Appendix

A  Detailed description of the biprobability analysis

The analytic expression for the neutrino transition probability in the 3+1 case is recast in the compact form

$$P(\nu) \equiv P = P_0 + A\cos(\Delta + \delta_{13}) + B\sin(\Delta - \delta_{14} + \delta_{13}), \tag{A.1}$$

where the first term $P_0 = P^{\rm ATM} \simeq 4 s^2_{23} s^2_{13}\sin^2\Delta$ is independent of the phases, and the phase-independent coefficients of the second and third terms are $A = 8 s_{13} s_{12} c_{12} s_{23} c_{23}\,(\alpha\Delta)\sin\Delta$ and $B = 4 s_{14} s_{24} s_{13} s_{23}\sin\Delta$.
After a few simplifications using trigonometric relations, the transition probability of eq. (A.1) becomes

$$P = P_0 + A'\cos\delta_{13} + B'\sin\delta_{13}. \tag{A.2}$$

Similarly, the simplified antineutrino transition probability is

$$\bar P = \bar P_0 + \bar A'\cos\delta_{13} - \bar B'\sin\delta_{13}, \tag{A.3}$$

where the coefficients $A'$, $B'$, $\bar A'$, and $\bar B'$ are defined as

$$A' = A\cos\Delta + B\sin(\Delta - \delta_{14}), \tag{A.4}$$

$$B' = -A\sin\Delta + B\cos(\Delta - \delta_{14}), \tag{A.5}$$

$$\bar A' = A\cos\Delta + B\sin(\Delta + \delta_{14}), \tag{A.6}$$

$$\bar B' = -A\sin\Delta + B\cos(\Delta + \delta_{14}). \tag{A.7}$$

The factors $\bar A'$ and $\bar B'$ are obtained from $A'$ and $B'$ by the replacement $\delta_{14} \to -\delta_{14}$. Eliminating $\delta_{13}$ from the modified neutrino and antineutrino transition probabilities, we obtain

$$\frac{1}{\left(\dfrac{A'}{B'} + \dfrac{\bar A'}{\bar B'}\right)^2} \left( \frac{P - P_0}{B'} + \frac{\bar P - \bar P_0}{\bar B'} \right)^2 + \frac{1}{\left(\dfrac{B'}{A'} + \dfrac{\bar B'}{\bar A'}\right)^2} \left( \frac{P - P_0}{A'} - \frac{\bar P - \bar P_0}{\bar A'} \right)^2 = 1. \tag{A.8}$$

Expanding eq. (A.8) in powers of $P$ and $\bar P$ yields a quadratic curve of the form

$$a P^2 + b\,P\bar P + c \bar P^2 + d P + e \bar P + f = 0, \tag{A.9}$$

whose linear coefficients $d$, $e$ and constant term $f$ involve $P_0$ and $\bar P_0$ but are not needed to determine the inclination. Comparing eq. (A.9) with the general quadratic curve representing an ellipse,

$$a x^2 + b x y + c y^2 + d x + e y + f = 0, \tag{A.10}$$

where the counterclockwise angle of rotation from the $x$-axis to the major axis is given by $\tan 2\theta = b/(a - c)$, we can
understand the inclination of the ellipse in the biprobability plots:

a = 1/[B′² (A′/B′ + Ā′/B̄′)²] + 1/[A′² (B′/A′ + B̄′/Ā′)²]   (A.11)

b = 2/[B′ B̄′ (A′/B′ + Ā′/B̄′)²] − 2/[A′ Ā′ (B′/A′ + B̄′/Ā′)²]   (A.12)

c = 1/[B̄′² (A′/B′ + Ā′/B̄′)²] + 1/[Ā′² (B′/A′ + B̄′/Ā′)²]   (A.13)

Simplifying the results under the assumption that matter effects induce negligible perturbations on the interference terms (i.e. Ā = A, B̄ = B) and using equations (A.4), (A.5), (A.6) and (A.7), the angle of inclination becomes

tan 2θ = [(B² − A²) cos 2∆ − 2AB sin 2∆ cos δ14] / (2AB sin δ14)   (A.14)

References

[1] L. Wolfenstein, Neutrino Oscillations in Matter, Phys. Rev. D 17 (1978) 2369–2374.
[2] S. M. Bilenky, C. Giunti, and W. Grimus, Phenomenology of neutrino oscillations, Prog. Part. Nucl. Phys. 43 (1999) 1–86, [hep-ph/9812360].
[3] C. Giunti, Neutrino Flavor States and Oscillations, J. Phys. G 34 (2007) R93–R109, [hep-ph/0608070].
[4] H. Nunokawa, S. J. Parke, and J. W. F. Valle, CP Violation and Neutrino Oscillations, Prog. Part. Nucl. Phys. 60 (2008) 338–402, [arXiv:0710.0554].
[5] T. Schwetz, M. A. Tortola, and J. W. F. Valle, Three-flavour neutrino oscillation update, New J. Phys. 10 (2008) 113011, [arXiv:0808.2016].
[6] S. Pascoli and T. Schwetz, Prospects for neutrino oscillation physics, Adv. High Energy Phys. 2013 (2013) 503401.
[7] M. Blennow and A. Y. Smirnov, Neutrino propagation in matter, Adv. High Energy Phys. 2013 (2013) 972485, [arXiv:1306.2903].
[8] G. Bellini, L. Ludhova, G. Ranucci, and F. L. Villante, Neutrino oscillations, Adv. High Energy Phys. 2014 (2014) 191960, [arXiv:1310.7858].
[9] S. P. Mikheyev and A. Y. Smirnov, Resonance Amplification of Oscillations in Matter and Spectroscopy of Solar Neutrinos, Sov. J. Nucl. Phys. 42 (1985) 913–917.
[10] H. A. Bethe, A Possible Explanation of the Solar Neutrino Puzzle, Phys. Rev. Lett. 56 (1986) 1305.
[11] W. C. Haxton, R. G. Hamish Robertson, and A. M.
Serenelli, Solar Neutrinos: Status and Prospects, Ann. Rev. Astron. Astrophys. 51 (2013) 21–61, [arXiv:1208.5723].
[12] M. Maltoni and A. Y. Smirnov, Solar neutrinos and neutrino physics, Eur. Phys. J. A 52 (2016), no. 4 87, [arXiv:1507.05287].
[13] G. L. Fogli, E. Lisi, A. Marrone, and D. Montanino, Status of atmospheric nu(mu) → nu(tau) oscillations and decoherence after the first K2K spectral data, Phys. Rev. D 67 (2003) 093006, [hep-ph/0303064].
[14] M. C. Gonzalez-Garcia and M. Maltoni, Atmospheric neutrino oscillations and new physics, Phys. Rev. D 70 (2004) 033010, [hep-ph/0404085].
[15] T. Kajita, Discovery of Atmospheric Neutrino Oscillations, Int. J. Mod. Phys. A 31 (2016), no. 27 1630047.
[16] T. Kajita, Nobel Lecture: Discovery of atmospheric neutrino oscillations, Rev. Mod. Phys. 88 (2016), no. 3 030501.
[17] Daya Bay Collaboration, F. P. An et al., New Measurement of Antineutrino Oscillation with the Full Detector Configuration at Daya Bay, Phys. Rev. Lett. 115 (2015), no. 11 111802, [arXiv:1505.03456].
[18] RENO Collaboration, J. H. Choi et al., Observation of Energy and Baseline Dependent Reactor Antineutrino Disappearance in the RENO Experiment, Phys. Rev. Lett. 116 (2016), no. 21 211801, [arXiv:1511.05849].
[19] Double Chooz Collaboration, Y. Abe et al., Improved measurements of the neutrino mixing angle θ13 with the Double Chooz detector, JHEP 10 (2014) 086, [arXiv:1406.7763]. [Erratum: JHEP 02, 074 (2015)].
[20] Daya Bay Collaboration, F. P. An et al., Spectral measurement of electron antineutrino oscillation amplitude and frequency at Daya Bay, Phys. Rev. Lett. 112 (2014) 061801, [arXiv:1310.6732].
[21] H. Nunokawa, S. J. Parke, and R. Zukanovich Funchal, Another possible way to determine the neutrino mass hierarchy, Phys. Rev. D 72 (2005) 013009, [hep-ph/0503283].
[22] A. de Gouvea, J. Jenkins, and B. Kayser, Neutrino mass hierarchy, vacuum oscillations, and vanishing |U(e3)|, Phys. Rev.
D 71 (2005) 113009, [hep-ph/0503079].
[23] MINOS Collaboration, P. Adamson et al., Measurement of Neutrino and Antineutrino Oscillations Using Beam and Atmospheric Data in MINOS, Phys. Rev. Lett. 110 (2013), no. 25 251801, [arXiv:1304.6335].
[24] G. L. Fogli and E. Lisi, Tests of three flavor mixing in long baseline neutrino oscillation experiments, Phys. Rev. D 54 (1996) 3667–3670, [hep-ph/9604415].
[25] T2K Collaboration, K. Abe et al., Constraint on the matter–antimatter symmetry-violating phase in neutrino oscillations, Nature 580 (2020), no. 7803 339–344, [arXiv:1910.03887]. [Erratum: Nature 583, E16 (2020)].
[26] NOvA Collaboration, M. A. Acero et al., Improved measurement of neutrino oscillation parameters by the NOvA experiment, Phys. Rev. D 106 (2022), no. 3 032004, [arXiv:2108.08219].
[27] K. N. Abazajian et al., Light Sterile Neutrinos: A White Paper, arXiv:1204.5379.
[28] A. Palazzo, Phenomenology of light sterile neutrinos: a brief review, Mod. Phys. Lett. A 28 (2013) 1330004, [arXiv:1302.1102].
[29] S. Gariazzo, C. Giunti, M. Laveder, Y. F. Li, and E. M. Zavanin, Light sterile neutrinos, J. Phys. G 43 (2016) 033001, [arXiv:1507.08204].
[30] C. Giunti, Phenomenology of light sterile neutrinos, Mod. Phys. Lett. A 30 (2015), no. 20 1530015.
[31] C. Giunti, Light Sterile Neutrinos: Status and Perspectives, Nucl. Phys. B 908 (2016) 336–353, [arXiv:1512.04758].
[32] C. Giunti and T. Lasserre, eV-scale Sterile Neutrinos, Ann. Rev. Nucl. Part. Sci. 69 (2019) 163–190, [arXiv:1901.08330].
[33] S. Böser, C. Buck, C. Giunti, J. Lesgourgues, L. Ludhova, S. Mertens, A. Schukraft, and M. Wurm, Status of Light Sterile Neutrino Searches, Prog. Part. Nucl. Phys. 111 (2020) 103736, [arXiv:1906.01739].
[34] J. Kopp, P. A. N. Machado, M. Maltoni, and T. Schwetz, Sterile Neutrino Oscillations: The Global Picture, JHEP 05 (2013) 050, [arXiv:1303.3011].
[35] K. Sharma and S.
Patra, Study of matter effects in the presence of sterile neutrino using OMSD approximation, arXiv:2207.03249.
[36] T. Ohlsson, Status of non-standard neutrino interactions, Rept. Prog. Phys. 76 (2013) 044201, [arXiv:1209.2710].
[37] Y. Farzan and M. Tortola, Neutrino oscillations and Non-Standard Interactions, Front. in Phys. 6 (2018) 10, [arXiv:1710.09360].
[38] A. Falkowski, M. González-Alonso, and Z. Tabrizi, Consistent QFT description of non-standard neutrino interactions, JHEP 11 (2020) 048, [arXiv:1910.02971].
[39] I. Bischer and W. Rodejohann, General neutrino interactions from an effective field theory perspective, Nucl. Phys. B 947 (2019) 114746, [arXiv:1905.08699].
[40] S.-F. Ge and S. J. Parke, Scalar Nonstandard Interactions in Neutrino Oscillation, Phys. Rev. Lett. 122 (2019), no. 21 211801, [arXiv:1812.08376].
[41] G. Mention, M. Fechner, T. Lasserre, T. A. Mueller, D. Lhuillier, M. Cribier, and A. Letourneau, The Reactor Antineutrino Anomaly, Phys. Rev. D 83 (2011) 073006, [arXiv:1101.2755].
[42] GALLEX Collaboration, W. Hampel et al., Final results of the Cr-51 neutrino source experiments in GALLEX, Phys. Lett. B 420 (1998) 114–126.
[43] LSND Collaboration, A. Aguilar-Arevalo et al., Evidence for neutrino oscillations from the observation of ν̄e appearance in a ν̄µ beam, Phys. Rev. D 64 (2001) 112007, [hep-ex/0104049].
[44] MiniBooNE Collaboration, A. A. Aguilar-Arevalo et al., Significant Excess of Electronlike Events in the MiniBooNE Short-Baseline Neutrino Experiment, Phys. Rev. Lett. 121 (2018), no. 22 221801, [arXiv:1805.12028].
[45] S. K. Agarwalla, S. S. Chatterjee, and A. Palazzo, Signatures of a Light Sterile Neutrino in T2HK, JHEP 04 (2018) 091, [arXiv:1801.04855].
[46] S. Choubey, D. Dutta, and D. Pramanik, Imprints of a light Sterile Neutrino at DUNE, T2HK and T2HKK, Phys. Rev. D 96 (2017), no. 5 056026, [arXiv:1704.07269].
[47] S. Choubey, D. Dutta, and D.
Pramanik, Measuring the Sterile Neutrino CP Phase at DUNE and T2HK, Eur. Phys. J. C 78 (2018), no. 4 339, [arXiv:1711.07464].
[48] N. Haba, Y. Mimura, and T. Yamada, θ23 octant measurement in 3 + 1 neutrino oscillations in T2HKK, Phys. Rev. D 101 (2020), no. 7 075034, [arXiv:1812.10940].
[49] P. Coloma, D. V. Forero, and S. J. Parke, DUNE Sensitivities to the Mixing between Sterile and Tau Neutrinos, JHEP 07 (2018) 079, [arXiv:1707.05348].
[50] S. K. Agarwalla, S. S. Chatterjee, and A. Palazzo, Physics Reach of DUNE with a Light Sterile Neutrino, JHEP 09 (2016) 016, [arXiv:1603.03759].
[51] J. M. Berryman, A. de Gouvêa, K. J. Kelly, and A. Kobach, Sterile neutrino at the Deep Underground Neutrino Experiment, Phys. Rev. D 92 (2015), no. 7 073012, [arXiv:1507.03986].
[52] Hyper-Kamiokande Proto-Collaboration, K. Abe et al., Physics potential of a long-baseline neutrino oscillation experiment using a J-PARC neutrino beam and Hyper-Kamiokande, PTEP 2015 (2015) 053C02, [arXiv:1502.05199].
[53] Hyper-Kamiokande Collaboration, K. Abe et al., Physics potentials with the second Hyper-Kamiokande detector in Korea, PTEP 2018 (2018), no. 6 063C01, [arXiv:1611.06118].
[54] DUNE Collaboration, R. Acciarri et al., Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE): Conceptual Design Report, Volume 2: The Physics Program for DUNE at LBNF, arXiv:1512.06148.
[55] DUNE Collaboration, R. Acciarri et al., Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE): Conceptual Design Report, Volume 1: The LBNF and DUNE Projects, arXiv:1601.05471.
[56] DUNE Collaboration, J. Strait et al., Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE): Conceptual Design Report, Volume 3: Long-Baseline Neutrino Facility for DUNE June 24, 2015, arXiv:1601.05823.
[57] DUNE Collaboration, R.
Acciarri et al., Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE): Conceptual Design Report, Volume 4: The DUNE Detectors at LBNF, arXiv:1601.02984.
[58] J. Cao et al., Muon-decay medium-baseline neutrino beam facility, Phys. Rev. ST Accel. Beams 17 (2014) 090101, [arXiv:1401.8125].
[59] J. Tang, S. Vihonen, and T.-C. Wang, Precision measurements on δCP in MOMENT, JHEP 12 (2019) 130, [arXiv:1909.01548].
[60] M. Blennow, P. Coloma, and E. Fernández-Martinez, The MOMENT to search for CP violation, JHEP 03 (2016) 197, [arXiv:1511.02859].
[61] P. Bakhti and Y. Farzan, CP-Violation and Non-Standard Interactions at the MOMENT, JHEP 07 (2016) 109, [arXiv:1602.07099].
[62] J. Tang and Y. Zhang, Study of nonstandard charged-current interactions at the MOMENT experiment, Phys. Rev. D 97 (2018), no. 3 035018, [arXiv:1705.09500].
[63] N. Klop and A. Palazzo, Imprints of CP violation induced by sterile neutrinos in T2K data, Phys. Rev. D 91 (2015), no. 7 073017, [arXiv:1412.7524].
[64] A. Cervera, A. Donini, M. B. Gavela, J. J. Gomez Cadenas, P. Hernandez, O. Mena, and S. Rigolin, Golden measurements at a neutrino factory, Nucl. Phys. B 579 (2000) 17–55, [hep-ph/0002108]. [Erratum: Nucl. Phys. B 593, 731–732 (2001)].
[65] K. Asano and H. Minakata, Large-Theta(13) Perturbation Theory of Neutrino Oscillation for Long-Baseline Experiments, JHEP 06 (2011) 022, [arXiv:1103.4387].
[66] S. K. Agarwalla, Y. Kao, and T. Takeuchi, Analytical approximation of the neutrino oscillation matter effects at large θ13, JHEP 04 (2014) 047, [arXiv:1302.6773].
[67] A. M. Dziewonski and D. L. Anderson, Preliminary reference earth model, Phys. Earth Planet. Interiors 25 (1981) 297–356.
[68] H. Minakata and H. Nunokawa, Exploring neutrino mixing with low-energy superbeams, JHEP 10 (2001) 001, [hep-ph/0108085].
[69] S. K. Agarwalla, S. S. Chatterjee, A. Dasgupta, and A.
Palazzo, Discovery Potential of T2K and NOvA in the Presence of a Light Sterile Neutrino, JHEP 02 (2016) 111, [arXiv:1601.05995].
[70] P. Huber, M. Lindner, and W. Winter, Simulation of long-baseline neutrino oscillation experiments with GLoBES (General Long Baseline Experiment Simulator), Comput. Phys. Commun. 167 (2005) 195, [hep-ph/0407333].
[71] P. Huber, J. Kopp, M. Lindner, M. Rolinec, and W. Winter, New features in the simulation of neutrino oscillation experiments with GLoBES 3.0: General Long Baseline Experiment Simulator, Comput. Phys. Commun. 177 (2007) 432–438, [hep-ph/0701187].
[72] P. Huber, M. Lindner, and W. Winter, Superbeams versus neutrino factories, Nucl. Phys. B 645 (2002) 3–48, [hep-ph/0204352].
[73] G. L. Fogli, E. Lisi, A. Marrone, D. Montanino, and A. Palazzo, Getting the most from the statistical analysis of solar neutrino oscillations, Phys. Rev. D 66 (2002) 053010, [hep-ph/0206162].

Prepared for submission to JHEP

Impact of CP violation searches at MOMENT experiment with sterile neutrinos

Kiran Sharma, Sudhanwa Patra

Department of Physics, Indian Institute of Technology Bhilai, Raipur-492015, India

E-mail: kirans@iitbhilai.ac.in, sudhanwa@iitbhilai.ac.in
Abstract: We examine the scope of the MOMENT experiment in the context of CP violation searches in the presence of an extra eV-scale sterile neutrino. MOMENT is a proposed short baseline neutrino oscillation experiment using muon beams for neutrino production, making it advantageous with respect to the π0 background and other technical difficulties. We work at the first oscillation maximum, which matches the peak value of the flux, with a run time of 5 years for both neutrino and anti-neutrino modes. We perform bi-probability studies for both the 3 and 3+1 flavor mixing schemes. The CP violation sensitivities arising from the fundamental CP phase δ13 and the unknown CP phase δ14 are explored on a firm footing. A slight deterioration is observed in the CP violation sensitivity induced by δ13 when the presence of a sterile neutrino is considered.
We also look at the reconstruction of the CP violation phases δ13 and δ14; the MOMENT experiment shows significant capabilities in the precise measurement of the δ13 phase.

Keywords: Neutrino Oscillation, Medium-baseline, Sterile Neutrino, MOMENT

arXiv:2301.00390v1 [hep-ph] 1 Jan 2023

Contents

1 Introduction and Motivation
2 Transition probability in the 4-flavor scheme
2.1 Theoretical framework
2.2 Appearance probability P^4ν_µe
3 Discussion at probability level
4 Biprobability analysis
5 Experimental Details and Event Spectra
6 CP violation sensitivities in presence of sterile neutrino
6.1 Chi-square analysis
6.2 Reconstructing the CP phases
7 Conclusions and Outlook
8 APPENDIX
A Detailed Description of Biprobability analysis

1 Introduction and Motivation

Neutrino physics has made tremendous progress in the past few years.
The discovery of the phenomenon of neutrino oscillations [1–8] has not only answered the solar-neutrino problem [9–12] and the atmospheric-neutrino problem [13–16], but has also opened up new doors to look at beyond-standard-model physics. At present, as per the standard model, there are three active flavors of neutrinos, associated with the electron, muon and tau lepton. Neutrino oscillations in the presence of three active neutrinos are described by six oscillation parameters: two mass-squared differences, ∆m²21 = m²2 − m²1 and ∆m²31 = m²3 − m²1, three neutrino mixing angles θ12, θ13, θ23, and one Dirac CP phase δCP. The values of the oscillation parameters have been constrained by various neutrino experiments [17–24]; the only oscillation parameters left to be known precisely are the sign of the atmospheric mass-squared difference, i.e.
∆m²31, which will resolve the issue of neutrino mass ordering, the precise measurement of the mixing angle θ23, which will help in establishing the correct octant, and the value of the fundamental CP phase. The long baseline experiments T2K [25] and NOνA [26] have independently reported the value of δCP, but there is a slight tension between the two values. This discrepancy may be because of systematic uncertainties, or it may be an indication of new physics arising from the presence of sterile neutrinos [27–35] or non-standard interactions [36–40]. Certain short baseline anomalies coming from the reactor and Gallium [41, 42] and accelerator experiments LSND [43] and MiniBooNE [44] hint towards the existence of a hypothetical fourth sterile mass eigenstate. The presence of such right-handed sterile neutrinos would extend the structure of the standard 3 flavor framework. As a result, the number of neutrino oscillation parameters gets enhanced in the minimally extended 3 + 1 framework. There are now three additional mixing angles, i.e.
θ14, θ24, θ34, and two more CP phases. We also have new mass-squared differences arising from the mixing of the fourth mass eigenstate ν4 with the mass eigenstates of the three active flavors (νe, νµ and ντ). The presence of a sterile neutrino has been widely explored in the literature (see references [45–51]). The effect of sterile neutrinos on the standard oscillation parameters has been widely explored for current and future proposed neutrino oscillation experiments like T2HK [52], T2HKK [53], and DUNE [54–57]. An important point to remark is that all the above-stated experimental facilities use neutrino beams originating from pion decay, whereas in the present work we look at the potential of the upcoming neutrino oscillation experiment MOMENT (MuOn-decay MEdium baseline NeuTrino beam facility) [58], which observes a neutrino beam produced from decaying muons at relatively low energies. As an advantage, one can skip the background effects and other technical difficulties at the experimental level.
MOMENT is a proposed short baseline neutrino oscillation experiment with a baseline of 150 km. A detailed description of the experimental facility is provided in section 5. The matter effects arising from the interaction of neutrinos with the matter potential as they propagate through it are negligible in comparison with long baseline neutrino experiments. As a result, fake CP violation scenarios arising from matter terms can be avoided at MOMENT. A comparative analysis for the precise measurement of the CP phase has been made among MOMENT, T2K, NOνA, T2HK, T2HKK, and DUNE, where it has been discussed that MOMENT can significantly improve the bounds on δCP [59]. Furthermore, the effect of non-standard interactions on the CP phase has been looked at in references [59–62]. To the best of our knowledge, the effect of sterile neutrinos at the MOMENT facility has not been studied to a large extent.
In this work, we look at the influence of a sterile neutrino on the precise measurement of the fundamental CP phase. This work aims to study the capabilities of the MOMENT experiment and put it into context in the global experimental effort in neutrino physics. This motivates us to study the physics potential of the MOMENT experiment in the presence of a hypothetical right-handed eV-scale sterile neutrino. We present a detailed discussion of the behavior of the 4-flavor transition probabilities and understand the bird's-eye view of CP trajectory curves in the standard 3-flavor scheme, extending it to our framework of the 3+1 flavor scheme. We perform a prospective study addressing the CP violation sensitivities and look at the degeneracies among different CP phases. The paper is organized as follows: In the next section, we develop the theoretical understanding of the neutrino and anti-neutrino oscillation probabilities in the presence of a sterile neutrino.
The numerical results are presented using a detailed bi-probability study in section 2. The experimental details and the corresponding event spectrum are explored in the next section. We focus on the CP violation study and the reconstruction of the different CP phases in section 6.1. The paper concludes with a summary of the prospective studies performed in our work.

2 Transition probability in the 4-flavor scheme

2.1 Theoretical framework

The formalism of 3 + 1 neutrino oscillation can be understood in terms of the time-dependent Schrödinger equation in the mass basis as

i ∂νj/∂t = H0 νj,   (2.1)

with j = 1, 2, 3, 4, where H0 is the effective Hamiltonian in the mass basis and νj is a neutrino mass eigenstate.
The effective flavor-dependent Hamiltonian for 3 + 1 neutrino oscillation, including matter effects, is given by

H4ν = U4ν diag(0, ∆m²21/2E, ∆m²31/2E, ∆m²41/2E) U†4ν + diag(VCC, 0, 0, −VNC) ≡ Hvac + Hmat,   (2.2)

In the case of the 3 + 1 scenario, i.e. three active neutrinos and one sterile neutrino, the mixing matrix U4ν can be parameterised as

U4ν = R(θ34, δ34) R(θ24, 0) R(θ14, δ14) R(θ23, 0) R(θ13, δ13) R(θ12, 0) ≡ R(θ34, δ34) R(θ24, 0) R(θ14, δ14) U3ν   (2.3)

where U3ν = R(θ23, 0) R(θ13, δ13) R(θ12, 0) is the standard three-flavor neutrino mixing matrix. The mass eigenstates are related to the flavor eigenstates as νj = (U4ν)jα να. The 4 × 4 real rotation matrices Rij (here R(θ24), R(θ23), R(θ12)) in the (i, j) plane, with 2 × 2 submatrices, are defined as

Rij = [ cij  sij ; −sij  cij ]   (2.4)

while the complex rotation matrices (i.e.
R(θ34, δ34), R(θ14, δ14), and R(θ13, δ13)) in the (i, j) plane are defined by

R̃ij = [ cij  s̃ij ; −s̃*ij  cij ],   with s̃ij = sij e^(−iδij)   (2.5)

with cij = cos θij and sij = sin θij.
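To make the parametrization of eqs. (2.3)-(2.5) concrete, the sketch below builds U4ν as the stated product of plane rotations and checks its unitarity. The 0-based index embedding and all angle values are illustrative assumptions, not fitted parameters.

```python
import numpy as np

# Sketch: construct U_4nu as the product of rotations in Eq. (2.3), using
# the complex rotation of Eq. (2.5). Angle values are arbitrary placeholders.

def rot(n, i, j, theta, delta=0.0):
    """n x n rotation in the (i, j) plane: R[i,j] = s*exp(-i*delta),
    R[j,i] = -s*exp(+i*delta); delta = 0 reduces to the real form of Eq. (2.4)."""
    R = np.eye(n, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = R[j, j] = c
    R[i, j] = s * np.exp(-1j * delta)
    R[j, i] = -s * np.exp(1j * delta)
    return R

th12, th13, th23, th14, th24, th34 = 0.59, 0.15, 0.85, 0.1, 0.1, 0.1
d13, d14, d34 = 1.2, 0.4, 0.0

# U_3nu = R(th23) R(th13, d13) R(th12); flavors/masses indexed 0..3.
U3 = rot(4, 1, 2, th23) @ rot(4, 0, 2, th13, d13) @ rot(4, 0, 1, th12)
# U_4nu = R(th34, d34) R(th24) R(th14, d14) U_3nu, as in Eq. (2.3).
U4 = rot(4, 2, 3, th34, d34) @ rot(4, 1, 3, th24) @ rot(4, 0, 3, th14, d14) @ U3
```

Since each factor is unitary, the product is unitary by construction, which is a useful sanity check on any hand-coded parametrization.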
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='7) The final expression of neutrino oscillation probability from να to νβ with neutrino energy E and baseline L is expressed in terms of evolution matrix as, P(να → νβ) = �� Sβα ��2 (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='8) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='2 Appearance probability P 4ν µe The 3 + 1 appearance probability for νµ → νe transition has been derived in ref.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [63] and is related to three flavour neutrino transition probability P 3ν µe and other interference terms arising from sterile neutrino mixing parameters as follows, P 4ν µe ≈ � 1 − s2 14 − s2 24 � P 3ν µe + 4s14s24s13s23 sin ∆31 sin(∆31 + δ13 − δ14), − 4s14s24c23s12c12 sin(∆21) sin δ14, + 2s2 14s2 24 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='9) with ∆ij = ∆m2 ijL 4E .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' The expression for P 3ν µe is sum of three contributions i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='e, atmospheric P ATM, solar P SOL and their interference term P INT I .' 
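As a cross-check of eqs. (2.2)–(2.8), the vacuum oscillation probability can be evaluated numerically by building U4ν from the rotations of eq. (2.3). The sketch below is illustrative only: the function names are ours, the benchmark values are taken from Table 1, and matter terms are dropped, so exp(−iH4νL) reduces to a phase rotation of the mass eigenstates.

```python
import numpy as np

def rot(i, j, theta, delta=0.0):
    """4x4 rotation in the (i, j) plane (1-indexed), eqs. (2.4)-(2.5)."""
    R = np.eye(4, dtype=complex)
    c = np.cos(theta)
    s = np.sin(theta) * np.exp(-1j * delta)     # s~_ij = s_ij e^{-i delta_ij}
    R[i - 1, i - 1] = R[j - 1, j - 1] = c
    R[i - 1, j - 1] = s
    R[j - 1, i - 1] = -np.conj(s)
    return R

# Benchmark mixing angles from Table 1; theta_34 = delta_34 = 0 drops R(theta_34)
th12, th13, th23 = np.arcsin(np.sqrt([0.318, 0.022, 0.574]))
th14 = th24 = np.arcsin(np.sqrt(0.02))

def prob_mue(E, L, d13=0.0, d14=0.0, dm2=(7.5e-5, 2.55e-3, 1.0)):
    """Vacuum P(nu_mu -> nu_e) via eqs. (2.7)-(2.8); E in GeV, L in km, dm2 in eV^2."""
    U = (rot(2, 4, th24) @ rot(1, 4, th14, d14) @ rot(2, 3, th23)
         @ rot(1, 3, th13, d13) @ rot(1, 2, th12))            # eq. (2.3)
    # Phase Dm^2 L / 2E = 2 * 1.267 * Dm^2[eV^2] * L[km] / E[GeV]
    phases = np.exp(-2j * 1.267 * np.array([0.0, *dm2]) * L / E)
    S = U @ np.diag(phases) @ U.conj().T                      # S = exp(-i H_vac L)
    return abs(S[0, 1]) ** 2     # flavor order (e, mu, tau, s): beta = e, alpha = mu

p = prob_mue(0.3, 150.0)         # MOMENT first-maximum energy and baseline
```

With Δm²41 = 1 eV² the sterile-driven phase oscillates rapidly in E; this is the fast oscillation that a finite detector energy resolution averages out, as discussed below.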
Looking at the neutrino oscillation parameters from the recent global-fit values displayed in Table 1,

    s23 = 0.76 , c23 = 0.65 , s12 = 0.56 , c12 = 0.83 , s13 = 0.15 , c13 = 0.99 .   (2.10)

The sine of the reactor mixing angle, s13, is treated as a small parameter in comparison to the other mixing angles and is taken to be of order O(ε) ≈ 0.15, while all other sines and cosines are O(1).
The other relevant parameter, the ratio of the two mass-squared differences, can be taken as |α| = Δm²21/|Δm²31| ≈ 0.03 ≃ ε². The sines of the sterile mixing angles, sin θ14 and sin θ24, are also considered small, of order ε. The remaining sterile mixing angle sin θ34 and the corresponding phase δ34 are set to zero, since the vacuum probability expression is independent of these contributions.

    Parameters    | True Value    | Marginalization Range
    sin²θ12       | 0.318         | Not Marginalized
    sin²θ13       | 0.022         | Not Marginalized
    sin²θ23       | 0.574         | [0.38, 0.64]
    sin²θ14       | 0.02          | Not Marginalized
    sin²θ24       | 0.02          | Not Marginalized
    sin²θ34       | 0             | Not Marginalized
    δ13           | [−180°, 180°] | [−180°, 180°]
    δ14           | [−180°, 180°] | Not Marginalized
    δ34           | 0             | Not Marginalized
    Δm²21 (eV²)   | 7.50 × 10⁻⁵   | Not Marginalized
    Δm²31 (eV²)   | 2.55 × 10⁻³   | [2.4, 2.7] × 10⁻³
    Δm²41 (eV²)   | 1             | Not Marginalized

Table 1. The parameters and the true values used to simulate the data in GLoBES are listed in columns 1 and 2. The third column gives the ranges over which sin²θ23, δ13, and Δm²31 are varied while minimizing the χ² to obtain the final results.

Retaining terms up to third order in ε, the appearance probability P4ν_µe simplifies to the sum of three contributions,

    P4ν_µe ≃ P^ATM + P^INT_I + P^INT_II ,   (2.11)

with the individual contributions given by

    P^ATM    ≃ 4 s²23 s²13 sin² Δ ,   (2.12)
    P^INT_I  ≃ 8 s13 s12 c12 s23 c23 (αΔ) sin Δ cos(Δ + δ13) ,   (2.13)
    P^INT_II ≃ 4 s14 s24 s13 s23 sin Δ sin(Δ + δ13 − δ14) ,   (2.14)

where Δ ≡ Δ31. The first term, driven by the atmospheric oscillation parameter Δ31, is a positive quantity providing the leading-order contribution to the transition probability. The sub-leading contributions arise from the interference of different frequencies: the term P^INT_I corresponds to the interference of the solar and atmospheric frequencies, while P^INT_II is connected with the interference of the atmospheric and sterile frequencies.
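The relative size of the three contributions at the first oscillation maximum can be read off numerically. This is a minimal sketch using the eq. (2.10) values; the function name and the choice s14 = s24 = 0.15 (i.e. O(ε)) are our own illustrative assumptions.

```python
import numpy as np

# Small-parameter benchmark values from eq. (2.10); s13 ~ eps ~ 0.15, |alpha| ~ eps^2
s23, c23 = 0.76, 0.65
s12, c12 = 0.56, 0.83
s13 = 0.15
s14 = s24 = 0.15            # sin(theta_14), sin(theta_24) taken O(eps) for illustration
alpha = 0.03                # |Delta m^2_21 / Delta m^2_31|

def p_mue_expanded(Delta, d13, d14):
    """Third-order-in-eps appearance probability, eqs. (2.11)-(2.14)."""
    p_atm = 4 * s23**2 * s13**2 * np.sin(Delta)**2                          # (2.12)
    p_int1 = (8 * s13 * s12 * c12 * s23 * c23 * (alpha * Delta)
              * np.sin(Delta) * np.cos(Delta + d13))                        # (2.13)
    p_int2 = (4 * s14 * s24 * s13 * s23
              * np.sin(Delta) * np.sin(Delta + d13 - d14))                  # (2.14)
    return p_atm + p_int1 + p_int2

# At the first oscillation maximum, Delta = pi/2, the sterile interference
# term reduces to 4 s14 s24 s13 s23 cos(d13 - d14)
p_peak = p_mue_expanded(np.pi / 2, 0.0, 0.0)
```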
One can infer from the expressions for P^INT_I and P^INT_II that the interference induced by the presence of the sterile neutrino is not proportional to Δ, whereas the solar-atmospheric interference term is directly proportional to it. As a result, the numerical simulation carried out for the MOMENT experiment, which operates around the first oscillation maximum, shows good performance.

3 Discussion at the probability level

The impact of matter effects on the appearance probabilities is marginal, since MOMENT is a short-baseline experiment with a baseline of 150 km. Thus the vacuum probability expressions can be used, neglecting the suppressed contributions from the MSW effect. For illustration, consider the transition probability in the presence of matter effects up to leading order [63–66],

    P^ATM_m ≃ (1 + 2v) P^ATM ,   (3.1)

with the correction factor

    v = V/k ≡ 2VE/Δm²31 ,  where V = √2 GF Ne .   (3.2)

For the MOMENT experiment, with its 150 km baseline and a first-maximum peak energy E ≈ 0.3 GeV, the correction factor is estimated to be v = 0.048. For the NOνA experiment, v ∼ 0.17 at the benchmark peak energy E = 2 GeV, which is why matter effects are important for long-baseline studies.

We look at the variation of the appearance probability channels νµ → νe and ν̄µ → ν̄e as a function of energy for fixed values of the fundamental CP phase. The value of δ13 is kept fixed at 0° and −90°, while the Dirac phase δ14 is varied over the values listed in the legends of figure 1. The remaining oscillation parameters are kept fixed at the benchmark values of Table 1. The simulations are performed with a constant matter density of 2.7 g/cc, based on the Preliminary Reference Earth Model (PREM) [67]. We take the sterile mass-squared difference to be Δm²41 = 1 eV²; as a result, the oscillations induced by the sterile state are averaged out by the finite energy resolution of the detector, and we look at the averaged-out behavior of the appearance probabilities. One can also recall that the anti-neutrino appearance probability is obtained from that of the neutrinos by changing the sign of the matter potential and of the fundamental and sterile CP phases, i.e.

    P(ν̄α → ν̄β)(V, δ13, δ14) = P(να → νβ)(−V, −δ13, −δ14) .   (3.3)

As evident from eq. 3.1, the leading-order contribution to the appearance probability is larger for neutrinos (V > 0) than for anti-neutrinos (V < 0).
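The order of magnitude of the correction factor can be reproduced from the standard numerical form √2 GF Ne ≈ 7.6 × 10⁻¹⁴ (Ye ρ / g cm⁻³) eV; the electron fraction Ye ≈ 0.5 used here is our assumption. With these inputs, v comes out at the few-percent level for MOMENT (the same order as the quoted 0.048) and ≈ 0.16 for NOνA-like energies, consistent with the quoted 0.17.

```python
def matter_v(E_GeV, rho_gcc=2.7, dm31_eV2=2.55e-3, Ye=0.5):
    """Dimensionless correction factor v = 2 V E / Dm^2_31 of eq. (3.2)."""
    V = 7.6e-14 * Ye * rho_gcc                 # sqrt(2) G_F N_e in eV
    return 2 * V * (E_GeV * 1e9) / dm31_eV2    # E converted from GeV to eV

v_moment = matter_v(0.3)   # MOMENT, E ~ 0.3 GeV: percent level
v_nova = matter_v(2.0)     # NOvA-like, E ~ 2 GeV: an order of magnitude larger
```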
The solid black line refers to the three-flavor appearance probability, while the colored lines correspond to the different 3+1 mixing scenarios. The blue line indicates the contribution arising from δ14 = −90°, orange from δ14 = 0°, green from δ14 = 90°, and magenta from δ14 = 180°. The left column of the figure shows the appearance probability for neutrinos, while the right column shows the behavior of the anti-neutrino appearance probability. The curves peak around the first oscillation maximum of the MOMENT experiment, i.e. ≈ 0.3 GeV. One can notice that the amplitude and shape of the probability depend strongly on the value of the sterile CP phase δ14. There is a decrease in amplitude for the anti-neutrino oscillation probability, as expected from eq. 3.1, which will be further understood from the event spectrum in section 5. Moreover, the plot also shows a mutual swapping of the curves as one moves from the neutrino to the anti-neutrino probability.

[Figure 1: four panels of P(νµ → νe) (left) and P(ν̄µ → ν̄e) (right) versus E from 0.1 to 0.8 GeV, for δ13 = 0° (top) and δ13 = −90° (bottom).]

Figure 1. Transition probabilities for neutrinos and anti-neutrinos after averaging over the fast oscillations, plotted against energy (varying from 0.1 to 0.8 GeV) with the matter density kept fixed at 2.7 g/cc. The three-flavor oscillation probability is shown by the black curve, while the colored curves represent the oscillation probability in the 3+1 mixing scenario for four different values of δ14, i.e. −90°, 0°, 90°, and 180°. The left column corresponds to the neutrino transition probabilities for two different values of δ13, whereas the right one is for the anti-neutrino transition probabilities.

4 Biprobability analysis

The bi-probability curves, as the name suggests, are parametric curves that trace out the bi-probability space spanned jointly by the neutrino and anti-neutrino oscillation probabilities. The idea of bi-probability curves in the three-flavor scheme was first introduced in [68]. The oscillation probabilities in the three-flavor scheme are written as:

    P = P0 + A (cos Δ cos δ13 − sin Δ sin δ13) ,   (4.1)
    P̄ = P0 + A (cos Δ cos δ13 + sin Δ sin δ13) ,   (4.2)

where A = 8 s13 s12 c12 s23 c23 (αΔ31) sin Δ31
and P0 ≈ (1 + 2v) P^ATM. Thus the bi-probability curves display, in a single diagram, the effects coming from the fundamental CP phase (the sin δ13 term, which is CP-violating, and the cos δ13 term, which is CP-conserving) together with the matter effects entering through the term A. Bi-probability plots therefore provide a bird's-eye view that is useful for mass-hierarchy and CP-violation studies. Since MOMENT is a short-baseline facility, it is not worthwhile to perform the analysis for mass-hierarchy sensitivities. In this work we restrict ourselves to the normal hierarchy and look at the CP trajectory diagrams for the neutrino and anti-neutrino oscillation probabilities. The fundamental CP phase δ13 is varied in the range −π to π, spanning the bi-probability space. Since the probabilities involve the periodic sine and cosine functions, the spanned space must form a closed trajectory as δ13 is varied.
We generalize the bi-probability representation to the 3+1 flavor scheme, where δ14 is kept fixed at different values while δ13 is varied over its entire range. Under the adiabatic approximation, equations 4.1 and 4.2 can be re-expressed as:

    P = l cos δ13 + m sin δ13 + n ,   (4.3)
    P̄ = l cos δ13 − m sin δ13 + n .   (4.4)

A deeper understanding of the CP-trajectory curves in the three-flavor analysis can be obtained by eliminating δ13 from the above equations; under the assumption A = Ā, which holds in vacuum, the neutrino and anti-neutrino probabilities obey

    ( (P + P̄ − 2n) / 2l )² + ( (P − P̄) / 2m )² = 1 .   (4.5)

This is the equation of an ellipse in which the lengths of the major and minor axes are measures of the coefficients of sin δ13 and cos δ13, respectively, in the oscillation probability.
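That every point of the trajectory satisfies the ellipse relation (4.5) can be verified directly; the coefficient values below are arbitrary illustrative numbers, not fits (in the text l = A cos Δ, m = −A sin Δ, n = P0).

```python
import numpy as np

# Illustrative trajectory coefficients for eqs. (4.3)-(4.4)
l, m, n = 0.004, -0.006, 0.05

d13 = np.linspace(-np.pi, np.pi, 721)           # delta_13 swept over its full range
P    = l * np.cos(d13) + m * np.sin(d13) + n    # neutrinos, eq. (4.3)
Pbar = l * np.cos(d13) - m * np.sin(d13) + n    # anti-neutrinos, eq. (4.4)

# Every point lies on the ellipse of eq. (4.5), so the trajectory closes
lhs = ((P + Pbar - 2 * n) / (2 * l))**2 + ((P - Pbar) / (2 * m))**2
assert np.allclose(lhs, 1.0)
```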
Further, the minor or major axis is always inclined at an angle of 45°, as visible in the bi-probability plots. The idea of bi-probability plots in the 3 flavor scheme can be extended to the 3+1 flavor analysis, where additional sources of CP violation arise from the presence of the sterile neutrino, as explored in [69] and the references therein. The analytic expression for the neutrino transition probability in the case of 3+1 neutrino oscillation is recast into a compact form as

P(ν) ≡ P = P0 + A cos(∆ + δ13) + B sin(∆ − δ14 + δ13)   (4.6)

where the first term P0 = P^ATM ≃ 4 s²23 s²13 sin²∆ is independent of the CP phases, and the phase-independent coefficients contained in the second and third terms are A = 8 s13 s12 c12 s23 c23 (α∆) sin ∆ and B = 4 s14 s24 s13 s23 sin ∆. After a few simplifications using trigonometric relations, the transition probability given in eq. (A.1) becomes

P = P0 + A′ cos δ13 + B′ sin δ13   (4.7)

Similarly, the simplified antineutrino transition probability is given by

P̄ = P̄0 + Ā′ cos δ13 − B̄′ sin δ13   (4.8)

where the coefficients A′, B′, Ā′ and B̄′ are defined as follows:

A′ = A cos ∆ + B sin(∆ − δ14)   (4.9)
B′ = −A sin ∆ + B cos(∆ − δ14)   (4.10)
Ā′ = A cos ∆ + B sin(∆ + δ14)   (4.11)
B̄′ = −A sin ∆ + B cos(∆ + δ14)   (4.12)

The detailed derivation of the CP trajectory, obtained by eliminating δ13, is given in the appendix. The angle of inclination is obtained by comparing the trajectory equation with the general equation of an ellipse:

tan 2θ = [(B² − A²) cos 2∆ − 2AB sin 2∆ cos δ14] / (2AB sin δ14)   (4.13)

It is clear from this expression that the orientation of the ellipse depends strongly on the value of the sterile CP phase δ14 and on the parameter ∆. The following conclusions can be drawn depending on the values of these parameters:

1. Firstly, when sin δ14 vanishes (i.e. δ14 = nπ), the inclination becomes θ = π/4. The orientation of the elliptical trajectory is then determined by the sign of the numerator: if the numerator is positive the orientation is counter-clockwise, while it is clockwise for a negative numerator.

2. Also, if ∆ ≈ nπ/2, which holds for the MOMENT experiment since it operates at the first oscillation maximum, the inclination angle simplifies to

tan 2θ ≈ 0.366 / sin δ14   (4.14)

Now there are again two possible inclinations for δ14 → (2n − 1)π/2, depending upon the value of n (where n = 1, 2, 3, ...). For the positive sign in the denominator, the minor axis is inclined by ≈ 10°, whereas for the negative sign the major axis is inclined by ≈ −10°. This discussion can easily be confirmed from the bi-probability plots in fig. 2. We show the dependence of the ellipse inclination for four different values of δ14. The parameter δ13 is varied over the complete allowed range [−π, π] while δ14 is kept fixed in each plot, as mentioned in the legend. The solid black ellipse shows the variation of the neutrino and antineutrino probabilities in the 3-flavor scheme.
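The primed coefficients of eqs. (4.9)-(4.12) and the inclination of eq. (4.13) can be sketched numerically. The values of A, B and ∆ below are hypothetical placeholders, with ∆ set to π/2 as at the first oscillation maximum.

```python
import math

A, B = 0.020, 0.008          # hypothetical phase-independent coefficients
Delta = math.pi / 2          # first oscillation maximum, Delta ~ pi/2

def primed_coefficients(delta14):
    """Coefficients A', B', Abar', Bbar' of eqs. (4.9)-(4.12)."""
    Ap    =  A * math.cos(Delta) + B * math.sin(Delta - delta14)   # (4.9)
    Bp    = -A * math.sin(Delta) + B * math.cos(Delta - delta14)   # (4.10)
    Abarp =  A * math.cos(Delta) + B * math.sin(Delta + delta14)   # (4.11)
    Bbarp = -A * math.sin(Delta) + B * math.cos(Delta + delta14)   # (4.12)
    return Ap, Bp, Abarp, Bbarp

def inclination_deg(delta14):
    """Inclination theta from eq. (4.13), in degrees.  atan2 handles the
    sin(delta14) -> 0 limit, where theta -> 45 degrees; the physical
    inclination of an ellipse axis is defined modulo 90 degrees."""
    num = (B ** 2 - A ** 2) * math.cos(2 * Delta) \
          - 2 * A * B * math.sin(2 * Delta) * math.cos(delta14)
    den = 2 * A * B * math.sin(delta14)
    return 0.5 * math.degrees(math.atan2(num, den))
```

For δ14 = nπ the denominator of eq. (4.13) vanishes and the inclination goes to 45°, reproducing point 1 above.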
The centers of the ellipses in the 3+1 flavor scheme almost coincide with the centers in the 3 flavor scheme; the slight variation arises from the negligible matter effects. We have marked special symbols to highlight four different fundamental CP phase values, i.e. δ13 = −180°, −90°, 0°, 90°. For δ14 = −180° and δ14 = 0°, the inclination is 45°, as seen from the orange and magenta curves. There is a swapping among the values of the δ13 phase, pointing towards the tracing of the elliptical trajectory as expected. The blue and green curves, plotted for δ14 = −90° and 90°, are inclined by −10° and 10° respectively.

[Figure 2 panels: four bi-probability plots, one per fixed δ14 value (0°, 180°, −90°, 90°), each showing the 3ν reference ellipse and the 3+1 curve in the plane of neutrino versus antineutrino P(νμ → νe) over the range 0.03-0.07, with markers at δ13 = −180°, −90°, 0°, 90°.]

Figure 2. Bi-probability plots under the normal hierarchy are shown for both the 3 flavor and 3+1 flavor mixing schemes. The CP-trajectory diagram for the 3-flavor case is drawn in black, while the orange, magenta, blue and green closed curves correspond to the fixed values of the δ14 phase mentioned in each legend. The neutrino energy is kept fixed at its first oscillation peak value of 0.3 GeV. The CP phase δ13 is varied in the range [−π, π]. Four different values of the δ13 phase (i.e. 0°, 180°, −90°, and 90°) are marked by different symbols for comparing the orientation of the ellipses.

5 Experimental Details and Event Spectra

In this section, we shed light on the specifications of the experimental setup MOMENT.
As the name suggests, MOMENT is a muon-decay medium-baseline neutrino beam facility that uses a continuous proton beam with a power of 15 MW. Since it is very difficult for any solid target to withstand such a high power, mercury jets are adopted as the target. Neutrinos are produced from the muons of the pion decay channel. The µ+ decays via the channel µ+ → ¯νµ νe e+ while the µ− decays as µ− → νµ ¯νe e−. Thus, we have four neutrino flavors, i.e. νe, νµ, ¯νe and ¯νµ. Hence, the MOMENT setup allows the study of eight oscillation channels (i.e. νe → νe, νe → νµ, νµ → νe, νµ → νµ, as well as their corresponding CP-conjugate partners).
To provide sensitivity towards CP violation, the distinction between neutrinos and antineutrinos is achieved by a 500 kiloton fiducial Gd-doped Water Cherenkov detector as the baseline detector. The final-state neutrons captured on the Gd provide a way to distinguish neutrinos from antineutrinos through the known interactions of neutrinos with nucleons: νe + n → p + e−, νµ + n → p + µ−, ¯νe + p → n + e+, ¯νµ + p → n + µ+.

Characteristics              MOMENT
Beam power                   15 MW
Fiducial detector mass       500 kton Gd-doped Water Cherenkov
Baseline                     150 km
Flux peaks at                0.3 GeV
First oscillation maximum    ≈ 0.3 GeV

Table 2. The experimental specifications of the MOMENT experiment.

The number of protons on target (POT) with a proton beam power of 15 MW and a proton energy of 1.5 GeV can be calculated as:

x = (W × y × t × 10^16) / (1.6 × Ep)   (5.1)

where W is the beam power and Ep is the proton energy. The operational time in one year, represented by t, is roughly ≈ 10^7 seconds, while y is the number of years for which protons are deposited on the target. For the MOMENT experiment, the POT will be 6.2 × 10^22. The fluxes for neutrinos and antineutrinos arising from the 1.5 GeV protons peak around 0.30 GeV, while the total energy range extends from 0.010 to 0.80 GeV. The simulations are performed assuming equal run time for the neutrino and antineutrino modes.
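Eq. (5.1) can be evaluated directly. This is a sketch only: the per-year operational time t ≈ 10^7 s is taken from the text, while the exposure (the number of years y) is an assumption of the example, not the authors' exact bookkeeping.

```python
def protons_on_target(W_MW, Ep_GeV, years, t_per_year=1.0e7):
    """Eq. (5.1): x = W * y * t * 1e16 / (1.6 * Ep), with the beam power W
    in MW, the proton energy Ep in GeV, and t the seconds of operation
    per year."""
    return W_MW * years * t_per_year * 1e16 / (1.6 * Ep_GeV)

# One year of a 15 MW beam of 1.5 GeV protons:
pot_per_year = protons_on_target(W_MW=15, Ep_GeV=1.5, years=1)
```

The factor 1.6 × 10^-16 is just the MW-to-GeV/s conversion (1 GeV ≈ 1.6 × 10^-10 J), so the formula is the beam power divided by the energy per proton, integrated over the run time.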
We use an uncertainty of 2.5% on the signal and 5% on the background for both the neutrino and antineutrino modes.

[Figure 3 panels: two event-rate histograms versus reconstructed energy (0.1-0.6 GeV), showing the 3ν case and δ14 = −90°, 0°, 90°, 180°; the left panel reaches about 60 appearance events per bin and the right about 30.]

Figure 3. The expected number of signal events is plotted against the reconstructed neutrino energy. The black curve refers to the 3 flavor case, while the colored histograms are for the (3+1) scheme with different values of δ14, as mentioned in the legend. The left panel corresponds to νe appearance events while the right one is for ¯νe appearance events.

From the existing literature, we know that the number of events in the i-th energy bin is given by

Ni = (T nn ϵ / 4πL²) ∫₀^Emax dE ∫_{E_Ai^min}^{E_Ai^max} dE_A φ(E) σνe(E) R(E, E_A) Pµe(E)   (5.2)

where T is the total running time, nn is the number of target nucleons in the detector, ϵ is the detector efficiency, φ(E) is the neutrino flux, σνe is the neutrino interaction cross-section and R(E, E_A) is the Gaussian energy resolution function of the detector.
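The double integral of eq. (5.2) can be sketched with a simple midpoint rule. Every ingredient below (flux shape, cross-section, oscillation probability and the overall normalisation) is a hypothetical placeholder chosen only to make the structure of the integral concrete; none of them is the real MOMENT input.

```python
import math

def flux(E):
    """Toy neutrino flux phi(E), peaked near 0.3 GeV (placeholder)."""
    return E * math.exp(-((E - 0.3) / 0.15) ** 2)

def xsec(E):
    """Toy nu_e cross section sigma(E), roughly linear in E (placeholder)."""
    return 0.7 * E

def prob_mue(E):
    """Toy two-flavor appearance probability P_mue(E) for L = 150 km."""
    return 0.05 * math.sin(1.27 * 2.5e-3 * 150 / E) ** 2

def resolution(E, EA, sigma=0.05):
    """Gaussian energy-resolution function R(E, E_A)."""
    return math.exp(-0.5 * ((E - EA) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def events_in_bin(EA_min, EA_max, norm=1.0, n_steps=200):
    """Midpoint-rule evaluation of the double integral in eq. (5.2);
    norm stands in for the prefactor T * n_n * eps / (4 * pi * L^2)."""
    E_lo, E_hi = 0.01, 0.8                      # true-energy range (GeV)
    dE  = (E_hi - E_lo) / n_steps
    dEA = (EA_max - EA_min) / n_steps
    total = 0.0
    for i in range(n_steps):
        E = E_lo + (i + 0.5) * dE
        inner = sum(resolution(E, EA_min + (j + 0.5) * dEA) * dEA
                    for j in range(n_steps))       # integral over E_A
        total += flux(E) * xsec(E) * prob_mue(E) * inner * dE
    return norm * total

n_events = events_in_bin(0.25, 0.35)
```

Because the toy flux peaks near 0.3 GeV, the reconstructed-energy bin around 0.3 GeV collects the most events, mirroring the event spectra in figure 3.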
The quantities E and E_A are the true and reconstructed (anti-)neutrino energies, respectively, and L is the baseline. The numerical computations have been performed using the GLoBES software [70, 71] and its additional toolkit required to incorporate the new physics arising from the presence of the sterile neutrino. We plot the number of events against the reconstructed energy. Since the flux peaks at about 0.3 GeV, we have the maximum number of events in that energy bin. In the plot, the thick black lines depict the 3 flavor scenario, and the other colored histograms are drawn in the 3+1 scenario for different values of δ14. The number of νe appearance events is almost double the number of ¯νe events, since the neutrino interaction cross-section differs substantially from the antineutrino interaction cross-section.
6 CP violation sensitivities in the presence of a sterile neutrino

6.1 Chi-square analysis

The sensitivity of an experiment to determine the precise values of the oscillation parameters is evaluated by performing a χ² analysis. It is performed by comparing the simulated true event rates, generated from the present best-fit data, with the events generated by the test hypothesis. The theoretical uncertainties and the systematic errors at the experimental level are incorporated using the method of pulls [72, 73]:

χ²(p_true, p_test) = min_ξ [ Σ_{i∈bins} (N_i^true − N_i^test(ξ))² / N_i^true + ξ²/σ_ξ² ]   (6.1)

where the nuisance parameter is denoted by ξ and the corresponding systematic error by σ_ξ. The terms involving the nuisance parameters are called pull terms. In order to counter the effect of the systematic errors, the penalty term ξ²/σ_ξ² is added.
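A minimal sketch of the pull method of eq. (6.1): the test prediction is scaled by a single nuisance parameter ξ with prior width σ_ξ, and χ² is minimised over ξ by a coarse scan. The event counts and the 2.5% normalisation error below are hypothetical.

```python
N_true = [52.0, 61.0, 40.0, 18.0]      # "data" events per energy bin (toy)
N_test = [48.0, 57.0, 43.0, 20.0]      # test-hypothesis prediction (toy)
sigma_xi = 0.025                       # 2.5% signal normalisation error

def chi2_pull(n_true, n_test, sigma):
    """Eq. (6.1) with one nuisance parameter: minimise over xi the
    statistical term plus the penalty (xi/sigma)^2."""
    best = float("inf")
    for k in range(-400, 401):         # scan xi over [-0.1, 0.1]
        xi = k * 2.5e-4
        stat = sum((t - m * (1.0 + xi)) ** 2 / t
                   for t, m in zip(n_true, n_test))
        best = min(best, stat + (xi / sigma) ** 2)
    return best

chi2 = chi2_pull(N_true, N_test, sigma_xi)
```

In a real GLoBES-style fit there is one such nuisance parameter per systematic (flux normalisation, cross-section, etc.) and the minimisation is done continuously, but the structure is the same: the pull lets the prediction float within its systematic band at the price of the penalty term.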
The nuisance parameters depend on the fiducial mass of the detector used in a particular experiment, as well as on other experimental properties like the flux normalization and the cross-section. The minimization of χ² is obtained by marginalizing over the oscillation parameter space. Therefore, one adds further penalty terms, called priors, to χ². The mathematical expression for the prior is given by

χ²_prior = ((N_true − N_test) / σ_true)²   (6.2)

In our analysis, we study the sensitivity of the MOMENT experiment to determine the precise value of the CP phase in the presence of a sterile neutrino. In the three flavor scenario, as seen from eq. (4.1), CP violation is induced by the presence of the sin δ13 term.
The χ² analysis quantifies the statistical significance at which we can reject the no-CP-violation test hypothesis:

χ² = (N(δ_CP^tr) − N(δ_CP^test = 0°, 180°))² / N(δ_CP^tr)   (6.3)

We have fixed the mixing angles θ12, θ13 and θ14 to their best-fit values, as mentioned in table 2.2, in both the true and the test spectra. We marginalize over the parameter θ23 and the mass-squared difference ∆m²31 in two different ways, as follows:

1. First case: ∆m²31 is kept fixed for both the 3 flavor and the 3+1 flavor schemes, while θ23 is marginalized over the range mentioned in table 2.2.

2. Second case: Both ∆m²31 and θ23 are marginalized in the 3 and 3+1 flavor schemes.
Also, the projected information on Δm²31 is added in the form of a prior,

    χ²_prior = [(Δm²31(true) − Δm²31(test)) / σ(Δm²31)]²    (6.4)

where σ(Δm²31) is the 1σ error on Δm²31. The numerical results are displayed in figure 4. The left panel corresponds to the fixed value of Δm²31 with θ23 varying, while the right panel corresponds to the second case of marginalizing over both Δm²31 and θ23. The solid black line represents the estimated value of Δχ² for the three-flavor scenario, while all the other colored lines depict the analysis for the 3+1 flavor scenario. While performing the χ² analysis for the discovery potential of the CP phases, we have considered equal neutrino and antineutrino running for a total time of (5+5) years. In each case we have considered four different values of the true δ14 phase, as in the probability analysis, while its test value is marginalized.
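The test statistic of eq. (6.3) with the Case 1 marginalization can be sketched as follows. The event-rate function and all numbers here are illustrative stand-ins, not the paper's MOMENT simulation; a Case 2 analysis would additionally scan Δm²31 and add the prior of eq. (6.4).

```python
import numpy as np

def delta_chi2_cpv(rate, delta_true_deg, theta23_grid):
    """Eq. (6.3): significance for rejecting the CP-conserving hypothesis
    delta_test in {0, 180} degrees, marginalizing over theta23 (Case 1)."""
    # "True" data generated at the central theta23 value of the grid.
    n_true = rate(delta_true_deg, theta23_grid[len(theta23_grid) // 2])
    chi2_values = []
    for d_test in (0.0, 180.0):          # CP-conserving test values
        for th in theta23_grid:          # marginalization over theta23
            n_test = rate(d_test, th)
            chi2_values.append((n_true - n_test) ** 2 / n_true)
    return min(chi2_values)              # best fit of the test hypothesis

# Toy rate model (illustrative only): CP-dependent modulation on a baseline.
toy = lambda d, th: 1000.0 * (1.0 + 0.1 * np.sin(np.radians(d))) * np.sin(th) ** 2
grid = np.linspace(0.7, 0.9, 21)
print(delta_chi2_cpv(toy, 0.0, grid))          # 0.0: no CP violation to find
print(delta_chi2_cpv(toy, -90.0, grid) > 0.0)  # True: maximal CP violation
```

Note how the θ23 scan absorbs part of the rate difference, which is exactly why marginalization reduces the quoted sensitivity.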
The figure clearly depicts that the CP sensitivity decreases in the presence of a sterile neutrino, primarily because of the degeneracies between the fundamental and active-sterile CP phases.

Figure 4. The figure indicates the potential of the MOMENT experiment for the discovery of CP violation induced by the fundamental phase δ13, plotted as Δχ² versus δ13(true), with the 7σ level marked. The black curve shows the behavior in the 3-flavor scheme, while the colored curves in each panel correspond to different values of δ14 (−90°, 0°, 90°, 180°). In the left panel, Δm²31 is kept fixed for both the 3 and 3+1 flavor mixing while we marginalize over θ23, whereas in the right panel we marginalize over both Δm²31 and θ23.

6.2 Reconstructing the CP phases

In the last subsection, we looked at the CP-violation discovery potential of the MOMENT experiment by performing a χ² analysis.
However, there is another way to extract complementary information: reconstructing the values of the two CP phases δ13 and δ14, independently of the amount of CP violation. We look at the contour plots obtained by reconstructing the CP phases in the δ13-δ14 plane. The test values of both CP phases are varied over (−π, π), and contours are shown simultaneously at the one, two, and three sigma levels. The four plots represent the regions reconstructed for the four benchmark values considered in figure 5. The solid black dots in the upper row represent the CP-conserving scenarios with values (0, 0) and (π, π), while the bottom panels show the CP-violating cases (−π/2, −π/2) and (π/2, π/2). Our results predict remarkable sensitivity for determining the precise value of the CP phase δ13 in the presence of a sterile neutrino.
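The reconstruction amounts to a two-dimensional grid scan over the test phases, classifying each point by the fixed Δχ² thresholds for two degrees of freedom. The sketch below uses an assumed toy χ² surface in place of the full event-rate fit:

```python
import numpy as np

# Scan test values of (delta13, delta14) over (-pi, pi) and classify each
# grid point by the 2-d.o.f. Delta chi^2 thresholds: 2.30 (1 sigma),
# 6.18 (2 sigma), 11.83 (3 sigma).
true13, true14 = 0.0, 0.0                       # the (0, 0) benchmark point
d13 = np.linspace(-np.pi, np.pi, 181)
d14 = np.linspace(-np.pi, np.pi, 181)
D13, D14 = np.meshgrid(d13, d14)
chi2 = 30.0 * ((D13 - true13) ** 2 + (D14 - true14) ** 2)  # toy surface

allowed_fractions = []
for name, level in (("1 sigma", 2.30), ("2 sigma", 6.18), ("3 sigma", 11.83)):
    frac = float(np.mean(chi2 <= level))        # allowed fraction of the plane
    allowed_fractions.append(frac)
    print(name, round(frac, 4))
```

The nested, growing allowed regions at 1σ, 2σ, and 3σ are exactly the bands shown in figure 5; plotting them is a single `matplotlib` `contour` call on `chi2`.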
Figure 5. Contour plots in the plane of δ13(test) and δ14(test) for four different true values of the CP phases δ13 and δ14. The discovery potential of δ13 in the δ13-δ14 plane is presented by the blue, red, and green bands at the 1σ, 2σ, and 3σ confidence levels, respectively. The left (right) panel of the first row corresponds to (0, 0) ((π, π)), while the left (right) panel of the second row corresponds to (−π/2, −π/2) ((π/2, π/2)).

7 Conclusions and Outlook

In this work, we have addressed the potential of the MOMENT experiment, emphasizing the role of a sterile neutrino over the standard three-flavor oscillation parameters. We have shown the transition probabilities for both neutrinos and antineutrinos in the 3+1 flavor scheme and looked at the space spanned by the CP trajectory curves.
The performance of MOMENT in understanding the CP-violation sensitivities induced by the fundamental CP phase δ13 and the new CP phase arising from active-sterile mixing has been explored. We found that the loss of CP sensitivity depends on the value of the δ14 phase. The discovery potential of CP violation in the 3-flavor scheme is quite significant, at the 7σ level, though it is reduced in the presence of the unknown CP phase δ14. The reduction may arise from the degeneracies between the two CP phases. We have also assessed the capability of the MOMENT experiment in reconstructing the true values of these CP phases.

Acknowledgments

Kiran Sharma would like to thank the Ministry of Education for the financial support for carrying out this research work. KS is very thankful to Dr. Sabya Sachi Chatterjee for the fruitful discussions carried out from time to time for the betterment of this work.
8 Appendix A: Detailed Description of the Biprobability Analysis

The analytic expressions for the neutrino transition probabilities in the case of 3+1 neutrino oscillations are recast into the compact form

    P(ν) ≡ P = P0 + A cos(Δ + δ13) + B sin(Δ − δ14 + δ13)    (A.1)

where the first term, P0 = P^ATM ≃ 4 s²23 s²13 sin² Δ, is independent of the phases, while the phase-independent factors contained in the second and third terms are A = 8 s13 s12 c12 s23 c23 (αΔ) sin Δ and B = 4 s14 s24 s13 s23 sin Δ. After a few simplifications using trigonometric relations, the transition probability of eq. (A.1) becomes

    P = P0 + A′ cos δ13 + B′ sin δ13    (A.2)

Similarly, the simplified antineutrino transition probability is

    P̄ = P̄0 + Ā′ cos δ13 − B̄′ sin δ13    (A.3)

where the coefficients A′, B′, Ā′ and B̄′ are defined as

    A′ = A cos Δ + B sin(Δ − δ14)    (A.4)
    B′ = −A sin Δ + B cos(Δ − δ14)    (A.5)
    Ā′ = A cos Δ + B sin(Δ + δ14)    (A.6)
    B̄′ = −A sin Δ + B cos(Δ + δ14)    (A.7)

The factors Ā′ and B̄′ are obtained from the relations for A′ and B′ by replacing δ14 → −δ14. Eliminating δ13 from the neutrino and antineutrino transition probabilities, we obtain

    [(P − P0)/B′ + (P̄ − P̄0)/B̄′]² / (A′/B′ + Ā′/B̄′)² + [(P − P0)/A′ − (P̄ − P̄0)/Ā′]² / (B′/A′ + B̄′/Ā′)² = 1    (A.8)
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='8) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ + A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ + B′ ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='P 2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′2� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ + A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′2� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ + B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='P ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′B ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ + A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='− ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′A ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ + B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='PP ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′ + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='−2P0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ + A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='−2P0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ + B′ ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='−2P 0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′B ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ + A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='−2P 0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′A ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ + B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='P ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='+ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='−2P 0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′2� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ + A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='−2P 0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′2� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ + B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='−2P0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′B ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ + A′ ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='−2P0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′A ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ + B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='P + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='1 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ + A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='+ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='+ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ + B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='P 2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='0 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′2� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ + A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′2� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ + B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='P0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='2 + ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′B ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ + A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='− ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′A ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ + B′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='A′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='�2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='P0P0 − 1 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='= 0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='(A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='9) Comparing eqn A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='9 with the general quadratic curve representing the equation of ellipse ax2 + bxy + cy2 + cx + ey + f = 0 (A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='10) where the counterclockwise angle of rotation from the x-axis to the major axis of the ellipse is defined by tan 2θ = b a−c, we can understand the inclination of ellipse in the biprobability plots.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' a = 1 B′2 � A′ B′ + A′ B′ �2 + 1 A′2 � B′ A′ + B′ A′ �2 (A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='11) – 16 – b = 2 B′B ′ � A′ B′ + A′ B′ �2 − 2 A′A ′ � B′ A′ + B′ A′ �2 (A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='12) c = 1 B ′2� A′ B′ + A′ B′ �2 + 1 A ′2� B′ A′ + B′ A′ �2 (A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='13) Simplifying the results under the assumption that matter effects induce negligible per- turbations on the interference terms(i.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' A = A, B = B) and using equations A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='4, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='5, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='6 and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='7, the angle of inclination becomes tan 2θ = (B2 − A2) cos 2∆ − 2AB sin 2∆ cos δ14 2AB sin δ14 (A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='14) References [1] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Wolfenstein, Neutrino Oscillations in Matter, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' D 17 (1978) 2369–2374.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [2] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' M.' 
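The tilt-angle relation tan 2θ = b/(a − c) can be checked numerically. The sketch below is illustrative only: it evaluates the general conic formula and the simplified closed form for hypothetical amplitude values of A, B, Δ and δ₁₄ (these numbers are assumptions, not taken from the paper), using atan2 to handle the a = c case.

```python
import math

def ellipse_tilt(a: float, b: float, c: float) -> float:
    """Counterclockwise angle (radians) from the x-axis to the major axis of
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0, via tan(2*theta) = b / (a - c)."""
    return 0.5 * math.atan2(b, a - c)

def tilt_from_amplitudes(A: float, B: float, Delta: float, delta14: float) -> float:
    """Simplified closed form for the biprobability-ellipse tilt when matter
    effects leave the interference terms unchanged (Abar = A, Bbar = B)."""
    num = (B**2 - A**2) * math.cos(2 * Delta) - 2 * A * B * math.sin(2 * Delta) * math.cos(delta14)
    den = 2 * A * B * math.sin(delta14)
    return 0.5 * math.atan2(num, den)

# Hypothetical illustrative values (not from the paper):
theta = tilt_from_amplitudes(A=0.2, B=0.1, Delta=math.pi / 4, delta14=math.pi / 2)
print(math.degrees(theta))
```

For a = c the conic is tilted at 45 degrees, which atan2 returns directly where a naive b/(a − c) division would fail.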
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 3 032004, [arXiv:2108.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='08219].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [27] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Abazajian et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=', Light Sterile Neutrinos: A White Paper, arXiv:1204.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='5379.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [28] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Palazzo, Phenomenology of light sterile neutrinos: a brief review, Mod.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' A 28 (2013) 1330004, [arXiv:1302.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='1102].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [29] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Gariazzo, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Giunti, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Laveder, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Li, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Zavanin, Light sterile neutrinos, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' G 43 (2016) 033001, [arXiv:1507.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='08204].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [30] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Giunti, Phenomenology of light sterile neutrinos, Mod.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' A 30 (2015), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 20 1530015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [31] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Giunti, Light Sterile Neutrinos: Status and Perspectives, Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' B 908 (2016) 336–353, [arXiv:1512.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='04758].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' – 18 – [32] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Giunti and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Lasserre, eV-scale Sterile Neutrinos, Ann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Rev.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Part.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 69 (2019) 163–190, [arXiv:1901.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='08330].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [33] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Böser, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Buck, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Giunti, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Lesgourgues, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Ludhova, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Mertens, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Schukraft, and M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Wurm, Status of Light Sterile Neutrino Searches, Prog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Part.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 111 (2020) 103736, [arXiv:1906.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='01739].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [34] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Kopp, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Machado, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Maltoni, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Schwetz, Sterile Neutrino Oscillations: The Global Picture, JHEP 05 (2013) 050, [arXiv:1303.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='3011].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [35] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Sharma and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Patra, Study of matter effects in the presence of sterile neutrino using OMSD approximation, arXiv:2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='03249.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [36] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Ohlsson, Status of non-standard neutrino interactions, Rept.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Prog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 76 (2013) 044201, [arXiv:1209.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='2710].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [37] Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Farzan and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Tortola, Neutrino oscillations and Non-Standard Interactions, Front.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' in Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 6 (2018) 10, [arXiv:1710.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='09360].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [38] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Falkowski, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' González-Alonso, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Tabrizi, Consistent QFT description of non-standard neutrino interactions, JHEP 11 (2020) 048, [arXiv:1910.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='02971].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [39] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Bischer and W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Rodejohann, General neutrino interactions from an effective field theory perspective, Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' B 947 (2019) 114746, [arXiv:1905.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='08699].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [40] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='-F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Ge and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Parke, Scalar Nonstandard Interactions in Neutrino Oscillation, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 122 (2019), no.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 21 211801, [arXiv:1812.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='08376].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [41] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Mention, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Fechner, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Lasserre, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Mueller, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Lhuillier, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Cribier, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Letourneau, The Reactor Antineutrino Anomaly, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' D 83 (2011) 073006, [arXiv:1101.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='2755].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [42] GALLEX Collaboration, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Hampel et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=', Final results of the Cr-51 neutrino source experiments in GALLEX, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' B 420 (1998) 114–126.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [43] LSND Collaboration, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Aguilar-Arevalo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=', Evidence for neutrino oscillations from the observation of ¯νe appearance in a ¯νµ beam, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' D 64 (2001) 112007, [hep-ex/0104049].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [44] MiniBooNE Collaboration, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Aguilar-Arevalo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=', Significant Excess of ElectronLike Events in the MiniBooNE Short-Baseline Neutrino Experiment, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 121 (2018), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 22 221801, [arXiv:1805.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='12028].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [45] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Agarwalla, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Chatterjee, and A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Palazzo, Signatures of a Light Sterile Neutrino in T2HK, JHEP 04 (2018) 091, [arXiv:1801.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='04855].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [46] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Choubey, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Dutta, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Pramanik, Imprints of a light Sterile Neutrino at DUNE, T2HK and T2HKK, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' D 96 (2017), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 5 056026, [arXiv:1704.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='07269].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [47] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Choubey, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Dutta, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Pramanik, Measuring the Sterile Neutrino CP Phase at DUNE and T2HK, Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' C 78 (2018), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' 4 339, [arXiv:1711.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content='07464].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' [48] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Haba, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Mimura, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Yamada, θ23 octant measurement in 3 + 1 neutrino oscillations in T2HKK, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/X9AyT4oBgHgl3EQfiPgb/content/2301.00390v1.pdf'} +page_content=' Rev.' 
diff --git a/Y9AyT4oBgHgl3EQfifgI/content/tmp_files/2301.00394v1.pdf.txt b/Y9AyT4oBgHgl3EQfifgI/content/tmp_files/2301.00394v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..0572981dbcdc13be84dd8015fd17b3a0600f1434
--- /dev/null
+++ b/Y9AyT4oBgHgl3EQfifgI/content/tmp_files/2301.00394v1.pdf.txt
@@ -0,0 +1,4415 @@
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

Deep Learning Technique for Human Parsing: A Survey and Outlook

Lu Yang, Wenhe Jia, Shan Li, Qing Song

Abstract—Human parsing aims to partition the humans in an image or video into multiple pixel-level semantic parts. In the last decade, it has gained significantly increased interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring, to social media, to visual special effects, just to name a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions remain unclear. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through a universal, concise, and extensible solution.
Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.

Index Terms—Human Parsing, Human Parsing Datasets, Deep Learning, Literature Survey

1 INTRODUCTION

Human parsing [1]–[5], considered a fundamental task of human-centric visual understanding [6], aims to classify the human parts and clothing accessories in images or videos at the pixel level. Numerous studies have been conducted on human parsing due to its crucial role in widespread application areas, e.g., security monitoring, autonomous driving, social media, electronic commerce, visual special effects, and artistic creation, giving birth to various excellent human parsing solutions and applications.

As early as the beginning of this century, some studies tried to identify the level of upper-body clothing [10], the grammatical representations of clothing [11], and the deformation of body contours [12] under very limited circumstances. These early studies facilitated research on pixel-level human part and clothing recognition, i.e., the human parsing task. Soon afterward, traditional machine learning and computer vision techniques were utilized to solve human parsing problems, e.g., structured models [1], [13], [14], clustering algorithms [15], grammar models [16], [17], conditional random fields [18]–[20], template matching [21], [22], and super-pixels [23]–[25]. Later, the prosperity of deep learning and convolutional neural networks [26]–[32] further promoted the vigorous development of human parsing.
Attention mechanisms [33]–[36], scale-aware features [37]–[40], tree structures [3], [41], graph structures [4], [42], [43], edge-aware learning [44]–[46], pose-aware learning [2], [47], [48], and other techniques [49]–[52] have greatly improved the performance of human parsing.

• Lu Yang, Wenhe Jia, Shan Li, and Qing Song are with the Beijing University of Posts and Telecommunications, Beijing, 100876, China (e-mail: soeaver@bupt.edu.cn; jiawh@bupt.edu.cn; ls1995@bupt.edu.cn; priv@bupt.edu.cn).
• Corresponding author: Qing Song (e-mail: priv@bupt.edu.cn).

Fig. 1: Human parsing tasks reviewed in this survey: (a) single human parsing (SHP) [7], centered on parts relationship modeling; (b) multiple human parsing (MHP) [8], which additionally requires human instance discrimination; (c) video human parsing (VHP) [9], which additionally requires temporal correspondence learning.

However, some existing challenges and under-investigated issues make
In +arXiv:2301.00394v1 [cs.CV] 1 Jan 2023 + +IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE +2 +Human Parsing +Challenges +(§2.1) +Relevant Tasks +(§2.3) +Large Intra-class Variation +Unconstrained Poses +Occlusion +Single Human Parsing (SHP) +Multiple Human Parsing (MHP) +Video Human Parsing (VHP) +Pose Estimation +Image Segmentation +Dense Pose Estimation +Person Re-identification +Virtual Try-on +Conditional Human +Image Generation +SHP Models +(§3.1) +MHP Models +(§3.2) +VHP Models +(§3.3) +Context Learning +Structured Representation +Multi-task Learning +Other Modeling Models +Bottom-up +One-stage Top-down +Two-stage Top-down +Cycle-tracking +Reconstructive Learning +Contrastive Learning +SHP Datasets +(§4.1) +VHP Datasets +(§4.3) +ATR +… … +LIP +VIP +SHP +Benchmarking +(§5.1) +VHP +Benchmarking +(§5.3) +MHP Datasets +(§4.2) +PASCAL-Person-Part +… … +CIHP +Outlook +(§6) +A Transformer-based +Baseline +(§6.1) +Under-Investigated +Open Issues +(§6.2) +New Directions +(§6.3) +Efficient Inference +Synthetic Dataset +Long-tailed Phenomenon +Video Instance-level +Human Parsing +Whole-body +Human Parsing +Cooperation across Different +Human-centric Directions +Taxonomy +(§2.2) +Applications +(§2.4) +MHP +Benchmarking +(§5.2) +Fig. 2: Outline of this survey. +response, we provide the first review that systematically +introduces background concepts, recent advances, and an +outlook on human parsing. +1.1 +Scope +This survey reviews human parsing from a comprehensive +perspective, including not only single human parsing (Fig- +ure 1 (a)) but also multiple human parsing (Figure 1 (b)) and +video human parsing (Figure 1 (c)). At the technical level, this +survey focuses on the deep learning-based human parsing +methods and datasets in recent ten years. To provide the +necessary background, it also introduces some relevant litera- +ture from non-deep learning and other fields. 
At the practical +level, the advantages and disadvantages of various methods +are compared, and detailed performance comparisons are +given. In addition to summarizing and analyzing the existing +work, we also give an outlook for the future opportunities +of human parsing and put forward a new transformer-based +baseline to promote sustainable development of the commu- +nity. A curated list of human parsing methods and datasets +and the proposed transformer-based baseline can be found +at https://github.com/soeaver/awesome-human-parsing. +1.2 +Organization +Figure 2 shows the outline of this survey. §2 gives some +brief background on problem formulation and challenges +(§2.1), human parsing taxonomy (§2.2), relevant tasks (§2.3), +and applications of human parsing (§2.4). §3 provides a +detailed review of representative deep learning-based human +parsing studies. Frequently used datasets and performance +comparisons are reviewed in §4 and §5. An outlook for the +future opportunities of human parsing is presented in §6, +including a new transformer-based baseline (§6.1), several +under-investigated open issues (§6.2) and new directions +(§6.3) for future study. Conclusions will be drawn in §7. +2 +PRELIMINARIES +2.1 +Problem Formulation and Challenges +Formally, we use x to represent input human-centric data, +y to represent pixel-level supervision target, X and Y +to denote the space of input data and supervision target. +Human parsing is to map data x to target y: X �→ Y. The +problem formulation is consistent with image segmentation +[56], but X is limited to the human-centric space. Therefore, +in many literatures, human parsing is regarded as fine- +grained image segmentation. +The central problem of human parsing is how to model +human structures. As we all know, the human body presents +a highly structured hierarchy, and all parts interact naturally. +Most parsers hope to construct this interaction explicitly +or implicitly. 
However, the following challenges make the +problem more complicated: +• Large Intra-class Variation. In human parsing, objects with +large visual appearance gaps may share the same semantic +categories. For example, “upper clothes” is an abstract con- +cept without strict visual constraints. Many kinds of objects +of color, texture, and shape belong to this category, leading to +significant intra-class variations. Further challenges may be +added by illumination changes, different viewpoints, noise +corruption, low-image resolution, and filtering distortion. +Large intra-class variations will increase the difficulty of +classifier learning decision boundaries, resulting in semantic +inconsistency in prediction. +• Unconstrained Poses. In the earlier human parsing bench- +marks [1], [14], [25], [37], the data is usually collected from +fashion media. From them people often stand or have a +limited number of simple pose. However, in the wild, human +pose is unconstrained, showing great diversity. Therefore, +more and more studies begin to pay attention to real-world +human parsing. Unconstrained poses will increase the state +space of target geometrically, which brings great challenges to +the human semantic representations. Moreover, the left-right +discrimination problem in human parsing is widespread +(e.g., left-arm vs right-arm, left-leg vs right-leg), and it is +also severely affected by unconstrained poses [44], [49], [57]. +• Occlusion. Occlusion mainly presents two modes: (1) +occlusion between humans and objects; (2) occlusion be- +tween humans. The former will destroy the continuity of +human parts or clothing, resulting in incomplete apparent +information of the targets, forming local semantic loss, and + +IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE +3 +easily causing ambiguity [37], [39]. The latter is a more +severe challenge. In addition to continuity destruction, it +often causes foreground confusion. 
In human parsing, only the occluded target human is regarded as the foreground, while the others are regarded as background. However, they have similar appearances, making it difficult to determine which parts belong to the foreground [58].
Remark. In addition to the above challenges, some scenario-specific challenges also hinder the progress of human parsing, such as the trade-off between inference efficiency and accuracy in crowded scenes, and motion blur and camera position changes in movement scenes.

2.2 Human Parsing Taxonomy

According to the characteristics (number of humans, data modality) of the input space X, human parsing can be categorized into three sub-tasks (see Figure 1): single human parsing, multiple human parsing, and video human parsing.
• Single Human Parsing (SHP). SHP is the cornerstone of human parsing and assumes that there is only one foreground human instance in the image. Therefore, y contains only the corresponding semantic category supervision at the pixel level. The simple and straightforward task definition lets most related research focus on how to model robust and generalized relationships between human parts. Beyond being the cornerstone of human parsing, SHP is also often used as auxiliary supervision for other tasks, e.g., person re-identification, human mesh reconstruction, and virtual try-on.
• Multiple Human Parsing (MHP). Multiple human parsing, also known as instance-level human parsing, aims to parse multiple human instances in a single pass. Besides category information, y also provides instance supervision at the pixel level, i.e., the person identity of each pixel. The core problems of MHP are how to discriminate different human instances and how to comprehensively learn the features of each human in crowded scenes. In addition, inference efficiency is an important concern for MHP: ideally, inference should be real-time and independent of the number of human instances.
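To make the difference between SHP and MHP supervision concrete, the sketch below pairs a semantic category map with a per-pixel person-identity map and decomposes them into per-instance part maps. The label layout is a hypothetical illustration, not the encoding of any particular dataset.

```python
import numpy as np

def split_instances(category_map, instance_map):
    """Decompose instance-level MHP supervision into per-human part maps.
    category_map[i, j]: semantic part label; instance_map[i, j]: person id
    (0 = background). Returns {person_id: part map with other people masked}."""
    per_human = {}
    for pid in np.unique(instance_map):
        if pid == 0:
            continue
        mask = instance_map == pid
        per_human[pid] = np.where(mask, category_map, 0)
    return per_human

# Toy 2x3 image with two people sharing part categories 1 ("head") and 2 ("torso").
category = np.array([[1, 1, 2],
                     [0, 2, 2]])
instance = np.array([[1, 1, 2],
                     [0, 2, 2]])
maps = split_instances(category, instance)
print(maps[2].tolist())  # [[0, 0, 2], [0, 2, 2]]
```

SHP supervision is the `category_map` alone; MHP adds the `instance_map`, and each per-human map is exactly an SHP target for that person.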
Beyond being an independent task, MHP is sometimes combined with other human visual understanding tasks in a multi-task learning manner, such as pose estimation [59], [60], dense pose [61], or panoptic segmentation [62].
• Video Human Parsing (VHP). VHP needs to parse every human in video data and can be regarded as a complex visual task integrating video segmentation and image-level human parsing. Current VHP studies mainly adopt the unsupervised video object segmentation setting [63], i.e., y is unknown during training, and the ground truth of the first frame is given at inference. The temporal correspondence is approximated from x alone. Relative to SHP and MHP, VHP faces additional challenges that are inevitable in video segmentation settings, e.g., motion blur and camera position changes. Benefiting from the growing popularity of video data, VHP has a wide range of application potential; typical cases are intelligent surveillance and video editing.
Remark. In recent years, some potential research directions have also received attention, including weakly-supervised human parsing [48], [51], [64], one-shot human parsing [65], [66], and interactive human parsing [67].

2.3 Relevant Tasks

Among research in computer vision, some tasks are strongly relevant to human parsing; they are briefly described in the following.
• Pose Estimation. The purpose of pose estimation is to locate human parts and build body representations (such as skeletons) from input data. Human parsing and pose estimation share the same input space X, but there are differences in the supervision targets. The most crucial difference is that human parsing is a dense prediction task, which needs to predict the category of each pixel, whereas pose estimation is a sparse prediction task, focusing only on the locations of a limited number of keypoints.
These two tasks are also often addressed with multi-task learning, or one of them is used as a guiding condition for the other. For example, human parsing as a guide can help pose estimation reduce the impact of clothing on human appearance [19].
• Image Segmentation. Image segmentation is a fundamental topic in image processing and computer vision. It mainly includes semantic segmentation and instance segmentation. As a basic visual task, many research directions can be regarded as its branches, and human parsing is one of them. In the pre-deep learning era, image segmentation focused on the continuity of color, texture, and edges, while human parsing paid more attention to modeling body topology. In the deep learning era, the methods of the two fields show more similarities. However, more and more human parsing literature chooses the modeling of part relationships as its goal, which is significantly different from the general goal of image segmentation. Therefore, human parsing and image segmentation are closely related but independent problems.
Remark. Ordinarily, most human-centric dense prediction tasks show positive relevance to human parsing, e.g., human matting [68], [69], human mesh reconstruction [70], [71], and face/hand parsing [72], [73].

2.4 Applications of Human Parsing

As a crucial task in computer vision, human parsing underpins a large number of applications. We introduce some common ones below.
• Dense Pose Estimation. The goal of dense pose estimation is to map all human pixels in an RGB image to the 3D surface of the human body [74]. Human parsing is an important precondition that can constrain the mapping of dense points. At present, mainstream dense pose estimation methods explicitly integrate human parsing supervision, such as DensePose R-CNN [74], Parsing R-CNN [61], and SimPose [75]. Therefore, human parsing performance directly affects dense pose estimation results.
• Person Re-identification. Person re-identification seeks to predict whether two images from different cameras belong to the same person. The appearance characteristics of the human body are an important factor affecting accuracy. Human parsing can provide pixel-level semantic information, helping re-identification models perceive the position and composition of human parts/clothing. Various studies have introduced human parsing explicitly or implicitly into re-identification methods, improving model performance in multiple aspects, e.g., local visual cues [76], [77], spatial alignment [78]–[80], background-bias elimination [81], domain adaptation [82], and clothes changing [83], [84].

Fig. 3: Timeline of representative human parsing works from 2012 to 2022. The upper part represents the datasets of human parsing (§4), and the lower part represents the models of human parsing (§3), grouped into single human parsing (SHP), multiple human parsing (MHP), and video human parsing (VHP).

• Virtual Try-on. Virtual try-on is a burgeoning and interesting application in the vision and graphics communities [85]–[92]. Most of the research follows three steps: human parsing, appearance generation, and refinement. Human parsing is thus a necessary step to obtain clothing masks, appearance constraints, and pose maintenance. Recently, some work has begun to study parser-free virtual try-on [93]–[95]. Through teacher-student learning, parsing-based pre-training, and other techniques, virtual try-on can be realized without the human parsing map during inference. However, most of these works still introduce parsing results during training, and their generation quality retains a gap from parser-based methods.
• Conditional Human Image Generation. Image generation/synthesis as a field has seen a lot of progress in recent years [96]–[99]. Non-existent but high-fidelity images can be created in large quantities. Among them, human image generation has attracted attention because of its rich downstream applications.
Compared with unconditional generation, conditional generation can produce the corresponding output as needed, and the human parsing map is one of the most widely used pre-conditions. There have been many excellent works on parsing-based conditional human image generation, e.g., CPFNet [100] and InsetGAN [101].
Remark. Besides the above cases, most human-centric generation applications can be built with the help of human parsing, e.g., deepfakes [102], [103], style transfer [104]–[106], and clothing editing [107]–[109].

3 DEEP LEARNING BASED HUMAN PARSING

Existing human parsing can be categorized into three sub-tasks: single human parsing, multiple human parsing, and video human parsing, focusing on parts relationship modeling, human instance discrimination, and temporal correspondence learning, respectively. According to this taxonomy, we sort out the representative works (lower part of Figure 3) and review them in detail below.

3.1 Single Human Parsing (SHP) Models

SHP considers extracting human features through parts relationship modeling. According to the modeling strategy, SHP models can be divided into three main classes: context learning, structured representation, and multi-task learning. Moreover, considering some special but interesting methods, we review them as "other modeling models". Table 1 summarizes the characteristics of the reviewed SHP models.

3.1.1 Context Learning

Context learning, a mainstream paradigm for single human parsing, seeks to learn the connection between local and global features to model part relationships. Recent studies have developed various context learning methods for single human parsing, including attention mechanisms and scale-aware features.
• Attention Mechanism. The first initiative was proposed in [33], which applies an attention mechanism to parts relationship modeling.
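The core operation of [33], learning soft per-pixel weights to merge features computed at several scales, can be sketched as follows. This is a simplified NumPy illustration; the shapes, temperature-free softmax, and toy values are assumptions, not the published implementation.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(scale_features, attention_logits):
    """Fuse features from S scales with learned per-pixel soft weights.
    scale_features: (S, C, H, W); attention_logits: (S, H, W)."""
    weights = softmax(attention_logits, axis=0)             # sum to 1 over scales
    return (scale_features * weights[:, None]).sum(axis=0)  # (C, H, W)

# Toy check: weights heavily favoring scale 0 recover the scale-0 features.
feats = np.stack([np.full((1, 2, 2), 1.0), np.full((1, 2, 2), 5.0)])
logits = np.stack([np.full((2, 2), 10.0), np.full((2, 2), -10.0)])
fused = attention_fuse(feats, logits)
print(np.allclose(fused, 1.0, atol=1e-3))  # True
```

In a real parser, `attention_logits` would be predicted by a small sub-network conditioned on the image, so each pixel can choose the scale best suited to the part it belongs to.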
Specifically, soft weights learned by the attention mechanism are used to weight features at different scales and merge them. At almost the same time, LG-LSTM [34], Graph-LSTM [113], and Struc-LSTM [115] exploit complex local and global context information through Long Short-Term Memory (LSTM) [132] and achieve very competitive results. Then, [36] proposes a Semantic Prediction Guidance (SPG) module that learns to re-weight local features under the guidance of pixel-wise semantic prediction. With the rise of graph models, researchers realized that the attention mechanism is able to establish correlations between graph nodes. For example, [121] introduces Graph Pyramid Mutual Learning (Grapy-ML) to address the cross-dataset human parsing problem, in which self-attention is used to model the correlations between context nodes. Although attention mechanisms have achieved great results in previous work, global context dependency cannot be fully understood due to the lack of explicit prior supervision. CDGNet [131] adopts human parsing labels accumulated in the horizontal and vertical directions as supervision, aiming to learn the position distribution of human parts, and weights them into the global features through an attention mechanism to achieve accurate parts relationship modeling.

TABLE 1: Summary of essential characteristics for reviewed SHP models (§3.1). The training datasets and whether each method is open source are also listed. See §4 for more detailed descriptions of datasets. These notes also apply to the other tables. The Attention and Scale-aware columns belong to context learning; Tree and Graph to structured representation; Edge and Pose to multi-task learning; Denoising and Adversarial to other modeling models.

Year | Method | Pub. | Attention | Scale-aware | Tree | Graph | Edge | Pose | Denoising | Adversarial | Datasets | Open Source
2012 | Yamaguchi [1] | CVPR | - | - | - | - | - | ✓ | - | - | FS | -
2013 | DMPM [14] | ICCV | - | - | ✓ | - | - | - | - | - | FS/DP | -
2013 | PaperDoll [20] | ICCV | - | - | - | - | - | ✓ | - | - | FS | -
2013 | CFPD [25] | TMM | - | - | - | - | - | ✓ | - | - | CFPD | -
2014 | HPM [17] | CVPR | - | - | ✓ | - | - | ✓ | - | - | FS/DP | -
2015 | M-CNN [110] | CVPR | - | - | - | ✓ | - | - | - | - | ATR | -
2015 | Co-CNN [37] | ICCV | - | ✓ | - | - | - | - | - | - | FS/ATR | -
2015 | FPVC [111] | TMM | - | - | - | - | - | ✓ | - | - | FS/DP | -
2015 | ATR [22] | TPAMI | - | - | ✓ | - | - | - | - | - | FS/DP | -
2016 | AOG [112] | AAAI | - | - | ✓ | - | - | ✓ | - | - | - | -
2016 | Attention [33] | CVPR | ✓ | ✓ | - | - | - | - | - | - | PPP | ✓
2016 | LG-LSTM [34] | CVPR | ✓ | - | - | - | - | - | - | - | FS/ATR/PPP | -
2016 | Graph-LSTM [113] | ECCV | ✓ | - | - | ✓ | - | - | - | - | FS/ATR/PPP | -
2016 | HAZN [38] | ECCV | - | ✓ | - | - | - | - | - | - | PPP | -
2016 | SYSU-Clothes [114] | TMM | - | - | - | ✓ | - | - | - | - | SYSU-Clothes | -
2017 | Struc-LSTM [115] | CVPR | ✓ | - | ✓ | ✓ | - | - | - | - | ATR/PPP | -
2017 | SSL [116] | CVPR | - | - | - | - | - | ✓ | - | - | LIP/PPP | ✓
2017 | Joint [117] | CVPR | - | - | - | - | - | ✓ | - | - | PPP | -
2018 | ProCNet [118] | AAAI | - | - | ✓ | - | - | - | - | - | PPP | -
2018 | AFLA [49] | AAAI | - | - | - | - | - | - | - | ✓ | LIP | -
2018 | WSHP [64] | CVPR | - | - | - | - | - | ✓ | - | - | PPP | -
2018 | TGPNet [119] | MM | - | ✓ | - | - | - | - | - | - | ATR | ✓
2018 | MuLA [47] | ECCV | - | - | - | - | - | ✓ | - | - | LIP/PPP | -
2018 | MMAN [50] | ECCV | - | - | - | - | - | - | - | ✓ | LIP/PPP/PPSS | ✓
2018 | JPPNet [2] | TPAMI | - | - | - | - | - | ✓ | - | - | LIP/PPP | ✓
2019 | CE2P [44] | AAAI | - | ✓ | - | - | ✓ | - | - | - | LIP | ✓
2019 | Graphonomy [42] | CVPR | - | - | ✓ | ✓ | - | - | - | - | ATR/PPP | ✓
2019 | CNIF [3] | ICCV | - | - | ✓ | - | - | - | - | - | ATR/LIP/PPP | ✓
2019 | BSANet [120] | ICCV | - | ✓ | - | - | ✓ | - | - | - | PPP | -
2019 | SPGNet [36] | ICCV | ✓ | - | - | - | - | - | - | - | PPP | -
2019 | BraidNet [57] | MM | - | ✓ | - | - | - | - | - | - | LIP | -
2020 | Grapy-ML [121] | AAAI | ✓ | - | ✓ | ✓ | - | - | - | - | ATR/PPP | ✓
2020 | HHP [4] | CVPR | - | - | ✓ | ✓ | - | - | - | - | ATR/LIP/PPP/PPSS | ✓
2020 | SLRS [51] | CVPR | - | - | - | ✓ | ✓ | - | ✓ | - | ATR/LIP | -
2020 | PCNet [39] | CVPR | - | ✓ | - | ✓ | - | - | - | - | LIP/PPP | -
2020 | CorrPM [45] | CVPR | - | - | - | - | ✓ | ✓ | - | - | ATR/LIP | ✓
2020 | DTCF [46] | MM | - | ✓ | - | - | ✓ | - | - | - | LIP/PPP | -
2020 | SemaTree [41] | ECCV | ✓ | - | ✓ | - | - | - | - | - | LIP | ✓
2020 | OCR [122] | ECCV | ✓ | ✓ | - | - | - | - | - | - | LIP | ✓
2020 | BGNet [123] | ECCV | - | - | ✓ | ✓ | - | - | - | - | LIP/PPP/PPSS | -
2020 | HRNet [124] | TPAMI | - | ✓ | - | - | - | - | - | - | LIP | ✓
2020 | SCHP [52] | TPAMI | - | - | - | - | ✓ | - | ✓ | - | ATR/LIP/PPP | ✓
2021 | HIPN [125] | AAAI | - | - | - | - | - | - | ✓ | - | LIP/PPP | -
2021 | POPNet [65] | AAAI | ✓ | - | - | - | - | - | - | - | ATR-OS | ✓
2021 | MCIBI [126] | ICCV | ✓ | - | - | - | - | - | - | - | LIP | ✓
2021 | ISNet [127] | ICCV | ✓ | - | - | - | - | - | - | - | LIP | ✓
2021 | NPPNet [128] | ICCV | - | ✓ | - | - | - | ✓ | - | - | LIP/PPP | ✓
2021 | HTCorrM [129] | TPAMI | ✓ | - | - | - | ✓ | ✓ | - | - | ATR/LIP | -
2021 | PRHP [130] | TPAMI | - | - | ✓ | ✓ | - | - | - | - | ATR/LIP/PPP/PPSS | ✓
2022 | CDGNet [131] | CVPR | ✓ | - | - | - | ✓ | - | - | - | ATR/LIP | ✓
2022 | HSSN [5] | CVPR | - | ✓ | ✓ | - | - | - | - | - | LIP/PPP | ✓
2022 | PRM [43] | TMM | - | - | - | ✓ | - | - | - | - | LIP/PPP | -
2022 | PADNet [48] | TPAMI | - | - | - | - | - | ✓ | - | - | PPP | -

• Scale-aware Features. The most intuitive context learning method is to directly use scale-aware features (e.g., multi-scale features [133], [134], feature pyramid networks [135], [136]), which has been widely verified in semantic segmentation [56]. The earliest effort can be traced back to Co-CNN [37].
It integrates cross-layer context, global image-level context, super-pixel context, and cross-super-pixel neighborhood context into a unified architecture, which removes the obstacle posed by the low-resolution features of FCN [31] for modeling part relationships. Subsequently, [38] proposes the Hierarchical Auto-Zoom Net (HAZN), which adaptively zooms predicted image regions to their proper scales to refine the parsing. TGPNet [119] considers the label fragmentation and complex annotation in human parsing datasets a non-negligible obstacle to accurate parts relationship modeling and tries to alleviate this limitation by supervising multi-scale context information. PCNet [39] further studies adaptive contextual features and captures representative global context by mining the associated semantics of human parts through its proposed part class module, relational aggregation module, and relational dispersion module.

3.1.2 Structured Representation

The purpose of structured representation is to learn the inherent composition or decomposition modes of human parts, so as to model part relationships. Research efforts in this field are mainly made along two directions: using a tree structure to represent the hierarchical relationship between body and parts, and using a graph structure to represent the connectivity between different parts. These two ideas are complementary, so they have often been adopted simultaneously in recent work.
• Tree Structure. DMPM [14] and HPM [17] solve single human parsing using the parselets representation, which constructs a group of parsable segments by low-level over-segmentation algorithms, represents these segments as leaf nodes, and then searches for the best graph configuration to obtain semantic human parsing results.
Similarly, [22] formulates human parsing as an Active Template Regression (ATR) problem, where each human part is represented as a linear combination of learned mask templates and morphed to a more precise mask with active shape parameters. The human parsing results are then generated from the mask template coefficients and the active shape parameters. In the same line of work, ProCNet [118] treats human parsing as a progressive recognition task, modeling structured part relationships by locating the whole body and then segmenting hierarchical components gradually. CNIF [3] further extends the human tree structure and represents the human body as a hierarchy of multi-level semantic parts, treating human parsing as a multi-source information fusion process. A more efficient solution is developed in [41], which uses a tree structure to encode human physiological composition and then designs a coarse-to-fine process in a cascade manner to generate accurate parsing results.
• Graph Structure. The graph structure is an excellent relationship modeling tool, and some researchers have introduced it into human parsing networks for part-relation reasoning. A clothing co-parsing system is designed in [114], which takes segmented regions as vertices and incorporates several contexts of clothing configuration to build a multi-image graphical model. To address the cross-dataset human parsing problem, Graphonomy [42] proposes a universal human parsing agent, introducing hierarchical graph transfer learning to encode the underlying label semantic elements and propagate relevant semantic information. BGNet [123] aims to improve the accuracy of human parsing in similar or cluttered scenes through the graph structure.
It exploits the inherent hierarchical structure of the human body and the relationships between different human parts, employing grammar rules in both cascaded and parallel manners to correct the segmentation of easily confused human parts. A landmark work along this line was proposed by Wang et al. [4], [130]. They construct a hierarchical human parser (HHP), representing the hierarchical human structure by three kinds of part relations: decomposition, composition, and dependency. Besides, HHP uses a message-passing, feed-back inference scheme to reason about the human structure effectively. Following this idea, [43] proposes Part-aware Relation Modeling (PRM) for human parsing, generating features with adaptive context for human parts of various sizes and shapes.

3.1.3 Multi-task Learning

Auxiliary supervision, such as part edges or human pose, can help the parser better understand the relationships between parts. Therefore, multi-task learning has become an essential paradigm for single human parsing.
• Edge-aware Learning. Edge information is implicit in human parsing datasets, so edge-aware supervision or features can be introduced into a human parser without additional labeling cost. In particular, edge-aware learning can enhance the model's ability to discriminate adjacent parts and improve the fineness of part boundaries. The typical work is [44], which proposes a Context Embedding with Edge Perceiving (CE2P) framework, using an edge perceiving module to integrate the characteristics of object contours to refine part boundaries. Because of its excellent performance and scalability, CE2P has become the baseline for many subsequent works. CorrPM [45] and HTCorrM [129] are built on CE2P and further use part edges to help model part relationships.
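Because part boundaries are fully determined by the parsing labels, the edge supervision map comes for free. The following is a minimal sketch of deriving such a map, not the edge-generation code of CE2P or its successors: a pixel is marked as an edge wherever its label differs from a right or bottom neighbor.

```python
import numpy as np

def edges_from_parsing(label_map):
    """Derive a binary part-boundary map from parsing labels alone:
    mark a pixel if its label differs from the right or bottom neighbor."""
    edge = np.zeros_like(label_map, dtype=bool)
    edge[:, :-1] |= label_map[:, :-1] != label_map[:, 1:]   # horizontal change
    edge[:-1, :] |= label_map[:-1, :] != label_map[1:, :]   # vertical change
    return edge.astype(np.uint8)

# Toy 3x3 parsing map with parts 1, 2, 3.
labels = np.array([[1, 1, 2],
                   [1, 1, 2],
                   [3, 3, 3]])
edge_map = edges_from_parsing(labels)
print(edge_map.tolist())  # [[0, 1, 0], [1, 1, 1], [0, 0, 0]]
```

Training an auxiliary head against such a map is what lets edge-aware methods sharpen adjacent-part boundaries at zero annotation cost.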
CorrPM and HTCorrM construct a heterogeneous non-local module to mix edge, pose, and semantic features into a hybrid representation, and explore the spatial affinity between the hybrid representation and the parsing feature map at all positions. BSANet [120] considers edge information helpful for eliminating part-level ambiguities and proposes a joint parsing framework with boundary and semantic awareness. Specifically, a boundary-aware module makes intermediate-level features focus on part boundaries for accurate localization; these are then fused with high-level features for efficient part recognition. To further enrich the edge-aware features, a dual-task cascaded framework (DTCF) is developed in [46], which implicitly integrates parsing and edge features to refine human parsing results progressively.
• Pose-aware Learning. Both human parsing and pose estimation seek to predict dense, structured human representations, and there is a strong intrinsic relationship between them. Therefore, some studies have tried to use pose-aware learning to assist parts relationship modeling. As early as 2012, Yamaguchi et al. [1], [20] exploited the relationship between clothing and the underlying body pose, exploring techniques to accurately parse a clothed person into the constituent garment pieces. Shortly after, Liu et al. [25] combined a human pose estimation module with an MRF-based color/category inference module and a super-pixel category classifier module to parse fashion items in images. Subsequently, Liu et al. [111] extended this idea to semi-supervised human parsing, collecting a large number of unlabeled videos, using cross-frame context for human pose co-estimation, and then performing video joint human parsing.
SSL [116] and JPPNet [2] choose to impose human pose structure on parsing results without resorting to extra supervision, and adopt a multi-task learning manner to explore efficient parts relationship modeling. A similar work is developed in [47], which presents a Mutual Learning to Adapt model (MuLA) for joint human parsing and pose estimation. MuLA can quickly adjust the parsing and pose models to provide more robust and accurate results by incorporating information from the corresponding models. Different from the above work, Zeng et al. [128] focus on how to automatically design a unified model that performs the two tasks simultaneously so that they benefit each other. Inspired by NAS [137], they propose to search for an efficient network architecture (NPPNet), searching the encoder and decoder architectures respectively and embedding NAS units in both multi-scale feature interaction and high-level feature fusion. To get rid of annotating pixel-wise human part masks, a weakly-supervised human parsing approach is proposed in PADNet [48]. It develops an iterative training framework that transforms pose knowledge into part priors, so that only pose annotations are required during training, greatly alleviating the annotation burden.

TABLE 2: Highlights of parts relationship modeling methods for SHP models (§3.1). Representative works of each method are also given.

Method | Representative Works | Highlights
Attention | [33], [34], [36], [113], [115], [121], [131] | Helps locate the human parts of interest and suppress useless background information.
Scale-aware | [37], [38], [39], [119] | Fuses low-level texture and high-level semantic features, helping to parse small human parts.
Tree | [3], [14], [17], [22], [41], [118] | Simulates the composition and decomposition relationships between human parts and body.
Graph | [4], [42], [43], [114], [123], [130] | Models the correlations and differences between human parts.
Edge | [44], [45], [46], [120], [129] | Solves pixel confusion on the boundaries of adjacent parts, generating finer boundaries.
Pose | [1], [2], [20], [25], [47], [48], [111], [116], [128] | Serves as context clues to improve semantic consistency between parsing results and body structure.
Denoising | [51], [52], [125] | Alleviates the impact of super-pixel or annotation errors, improving robustness.
Adversarial | [49], [50] | Reduces domain differences between training and testing data, improving generalization.

3.1.4 Other Modeling Models

Other works employ techniques outside the above taxonomy, such as denoising and adversarial learning, which also make specific contributions to human parts relationship modeling and deserve a separate look.
• Denoising. To reduce labeling cost, mainstream SHP datasets contain a large amount of label noise [22], [116], so denoising learning for accurate parts relationship modeling has also received attention. SCHP [52] is the most representative work. It starts by using inaccurate parsing labels as initialization and designs a cyclical learning scheduler to infer more reliable pseudo labels. In the same period, Li et al. [51] attempt to combine denoising learning and semi-supervised learning, proposing a Self-Learning with Rectification (SLR) strategy for human parsing. SLR generates pseudo labels for unlabeled data to retrain the parsing model and introduces a trainable graph reasoning method to correct typical errors in the pseudo labels. Based on SLR, HIPN [125] further explores combining denoising learning with semi-supervised learning; it develops noise-tolerant hybrid learning, taking advantage of positive and negative learning to better handle noisy pseudo labels.
• Adversarial Learning.
Earlier, inspired by Generative Adversarial Nets (GAN) [96], a few works used adversarial learning to solve problems in parts relationship modeling. For example, to solve the domain adaptation problem, AFLA [49] proposes a cross-domain human parsing network, introducing a discriminative feature adversarial network and a structured label adversarial network to eliminate cross-domain differences in visual appearance and environmental conditions. MMAN [50] aims to solve the problem of low-level local and high-level semantic inconsistency in the pixel-wise classification loss. It contains two discriminators: Macro D, acting on the low-resolution label map and penalizing semantic inconsistency, and Micro D, focusing on the high-resolution label map and restraining local inconsistency.
Remark. In fact, many single human parsing models use several parts relationship modeling methods; the taxonomy above introduces only the core method of each model. Table 2 summarizes the highlights of each parts relationship modeling method.

TABLE 3: Summary of essential characteristics for reviewed MHP models (§3.2). "BU" indicates bottom-up; "1S-TD" indicates one-stage top-down; "2S-TD" indicates two-stage top-down.

Year | Method | Pub. | Pipeline | Datasets | Open Source
2017 | Holistic [138] | BMVC | 1S-TD | PPP | -
2018 | PGN [8] | ECCV | BU | PPP/CIHP | ✓
2019 | CE2P [44] | AAAI | 2S-TD | CIHP/MHP-v2.0 | ✓
2019 | Parsing R-CNN [61] | CVPR | 1S-TD | CIHP/MHP-v2.0 | ✓
2019 | BraidNet [57] | MM | 2S-TD | CIHP | -
2019 | Unified [139] | BMVC | 1S-TD | PPP/CIHP | -
2020 | RP R-CNN [140] | ECCV | 1S-TD | CIHP/MHP-v2.0 | ✓
2020 | SemaTree [41] | ECCV | 2S-TD | CIHP/MHP-v2.0 | ✓
2020 | NAN [141] | IJCV | BU | MHP-v1.0/MHP-v2.0 | ✓
2020 | SCHP [52] | TPAMI | 2S-TD | CIHP/MHP-v2.0/VIP | ✓
2021 | MGHR [59] | CVPR | BU | PPP/MHP-v2.0/COCO-DP | ✓
2022 | AIParsing [142] | TIP | 1S-TD | CIHP/MHP-v2.0/VIP | -

3.2 Multiple Human Parsing (MHP) Models

MHP seeks to locate and parse each human in the image plane.
The task setting is similar to instance segmentation, so it is also called instance-level human parsing. According to the pipeline for discriminating human instances, we divide MHP into three paradigms: bottom-up, one-stage top-down, and two-stage top-down. The essential characteristics of the reviewed MHP models are given in Table 3.
• Bottom-up. The bottom-up paradigm regards multiple human parsing as a fine-grained semantic segmentation task, which predicts the category of each pixel and groups pixels into the corresponding human instances. In a seminal work [8], Gong et al. propose a detection-free Part Grouping Network (PGN) that reformulates multiple human parsing as two twinned sub-tasks (semantic part segmentation and instance-aware edge detection) that can be jointly learned and mutually refined via a unified network. Among them, the instance-aware edge detection task groups semantic parts into distinct human instances. Then, NAN [141] proposes a deep Nested Adversarial Network for multiple human parsing. NAN consists of three GAN-like sub-nets performing semantic saliency prediction, instance-agnostic parsing, and instance-aware clustering, respectively. Recently, Zhou et al. [59] propose a new bottom-up regime to learn category-level multiple human parsing and pose estimation in a joint, end-to-end manner, called Multi-Granularity Human Representation (MGHR) learning. MGHR exploits structural information over different human granularities, transforming the difficult pixel grouping problem into an easier multi-human joint assembling task, which simplifies human instance discrimination.
• One-stage Top-down. One-stage top-down is the mainstream paradigm of multiple human parsing. It first locates each human instance in the image plane and then segments each human part in an end-to-end manner.
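The top-down flow just described — locate each person, then parse each crop — can be sketched as follows. Here `detector` and `single_parser` are hypothetical callables standing in for the two sub-networks, and the box and label layout is purely illustrative.

```python
def top_down_parse(image, detector, single_parser):
    """Top-down multiple human parsing sketch: detect each person,
    parse the crop, and paste (person id, part label) pairs back
    into a full-size map. 0 marks background."""
    H, W = image["height"], image["width"]
    canvas = [[0] * W for _ in range(H)]
    for pid, (x0, y0, x1, y1) in enumerate(detector(image), start=1):
        part_map = single_parser(image, (x0, y0, x1, y1))  # crop-local labels
        for dy, row in enumerate(part_map):
            for dx, label in enumerate(row):
                if label:                                  # keep instance id with part
                    canvas[y0 + dy][x0 + dx] = (pid, label)
    return canvas

# Toy usage with stub sub-networks: one person covering the right 2x2 region.
image = {"height": 2, "width": 3}
detector = lambda img: [(1, 0, 3, 2)]               # one box: x0, y0, x1, y1
single_parser = lambda img, box: [[1, 2], [1, 2]]   # 2x2 crop part labels
out = top_down_parse(image, detector, single_parser)
print(out)  # [[0, (1, 1), (1, 2)], [0, (1, 1), (1, 2)]]
```

In the one-stage variant both callables are heads of one network trained end-to-end; in the two-stage variant the detector is trained separately, which is why inference time grows with the number of detected people.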
An early attempt is Holistic [138], which consists of a human detection network and a part semantic segmentation network, passing the results of both networks to an instance CRF [143] to perform multiple human parsing. Inspired by Mask R-CNN [144], Qin et al. [139] propose a top-down unified framework that simultaneously performs human detection and single human parsing, identifying instances and parsing human parts in crowded scenes. A milestone one-stage top-down multiple human parsing model is proposed by Yang et al., who enhance Mask R-CNN in all aspects and propose the Parsing R-CNN [61] network, greatly and concisely improving the accuracy of multiple human parsing. Subsequently, Yang et al. propose an improved version of Parsing R-CNN, called RP R-CNN [140], which introduces a global semantic enhanced feature pyramid network and a parsing re-scoring network into the high-performance pipeline, achieving better performance. Later, AIParsing [142] introduces an anchor-free detector [145] into the one-stage top-down paradigm for discriminating human instances, avoiding the hyper-parameter sensitivity caused by anchors.

TABLE 4: Highlights of human instance discrimination methods for MHP models (§3.2). Representative works of each method are also given.

Method | Representative Works | Highlights
Bottom-up | [8], [59], [141] | Good model efficiency and good accuracy on pixel-wise segmentation, but poor accuracy on instance discrimination.
One-stage Top-down | [61], [138], [139], [140], [142] | Better trade-off between model efficiency and accuracy, but pixel-wise segmentation, especially at part boundaries, is not fine enough.
Two-stage Top-down | [41], [44], [52], [57] | Good accuracy but poor efficiency; model inference time is proportional to the number of human instances.

• Two-stage Top-down.
One-stage top-down and two-stage top-down paradigms are basically the same in operation flow. The difference between them is whether the detector is trained together with the segmentation sub-network in an end-to-end manner. All two-stage top-down multiple human parsing methods consist of a human detector and a single human parser. The earliest attempt is CE2P [44], which builds a framework called M-CE2P upon CE2P and Mask R-CNN: the detected human instances are cropped, then sent to the single human parser, and finally the parsing results of all instances are combined into a multiple human parsing prediction. Subsequent works, e.g., BraidNet [57], SemaTree [41], and SCHP [52], basically inherit this pipeline.

Remark. The advantage of bottom-up and one-stage top-down is efficiency, while the advantage of two-stage top-down is accuracy. But as a non-end-to-end pipeline, the inference speed of two-stage top-down is positively correlated with the number of human instances, which limits its practical application value. The detailed highlights of the three human instance discrimination methods are summarized in Table 4.

3.3 Video Human Parsing (VHP) Models

Existing VHP studies mainly focus on propagating the first-frame annotation through the entire video via an affinity matrix, which represents the temporal correspondences learned from raw video data. Considering their unsupervised learning paradigms, we can group them into three classes: cycle-tracking, reconstructive learning, and contrastive learning. We summarize the essential characteristics of reviewed VHP models in Table 5.

• Cycle-tracking. Early VHP methods model the unsupervised learning target mainly by the cycle-consistency of video frames, i.e., pixels/patches are expected to fall into the same locations after a cycle of forward-backward tracking.
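The forward-backward cycle idea just described can be sketched as a toy affinity-based tracker (a minimal NumPy sketch under simplifying assumptions, not any particular paper's implementation; all names are illustrative): patch features of adjacent frames define a soft transition matrix, and after tracking forward through the video and then backward, the round-trip transition should be close to the identity, i.e., each patch should return to itself.

```python
import numpy as np

def affinity(feat_a, feat_b, temperature=0.07):
    """Row-stochastic affinity between two frames' patch features.

    feat_a: (N, D) patch features of frame A; feat_b: (M, D) of frame B.
    Returns an (N, M) transition matrix whose rows sum to 1.
    """
    logits = feat_a @ feat_b.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)

def cycle_consistency_loss(feats):
    """Track patches forward through `feats` (a list of (N, D) arrays),
    then backward; penalize deviation of the round-trip transition
    from the identity via cross-entropy on its diagonal."""
    n = feats[0].shape[0]
    cycle = np.eye(n)
    for a, b in zip(feats[:-1], feats[1:]):               # forward pass
        cycle = cycle @ affinity(a, b)
    rev = feats[::-1]
    for a, b in zip(rev[:-1], rev[1:]):                   # backward pass
        cycle = cycle @ affinity(a, b)
    return -np.mean(np.log(np.diag(cycle) + 1e-8))

rng = np.random.default_rng(0)
video = [rng.normal(size=(16, 32)) for _ in range(4)]     # 4 frames, 16 patches
loss = cycle_consistency_loss(video)
```

In a real method the features come from a learnable encoder and the loss gradient shapes them; here random features merely demonstrate that the round-trip matrix is a product of row-stochastic transitions, so the loss is always finite and non-negative.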
ATEN [9] first leverages convolutional gated recurrent units to encode temporal feature-level changes; the optical flow of non-key frames is warped with the temporal memory to generate their features.

TABLE 5: Summary of essential characteristics for reviewed VHP models (§3.3). "Cycle." indicates cycle-tracking; "Recons." indicates reconstructive learning; "Contra." indicates contrastive learning. All models are tested on the VIP dataset.

Year | Method | Pub. | Cycle. | Recons. | Contra. | Open Source
2018 | ATEN [9] | MM | ✓ | - | - | ✓
2019 | TimeCycle [146] | CVPR | ✓ | - | - | ✓
2019 | UVC [147] | NeurIPS | ✓ | ✓ | - | ✓
2020 | CRW [148] | NeurIPS | ✓ | - | - | ✓
2021 | ContrastCorr [149] | AAAI | ✓ | ✓ | ✓ |
2021 | CLTC [150] | CVPR | - | - | ✓ | -
2021 | VFS [151] | ICCV | - | - | ✓ | ✓
2021 | JSTG [152] | ICCV | ✓ | - | ✓ | -
2022 | LIIR [153] | CVPR | - | ✓ | - | ✓
2022 | SCC [154] | CVPR | ✓ | - | ✓ | -
2022 | UVC+ [155] | ArXiv | ✓ | ✓ | ✓ | -

TABLE 6: Highlights of temporal correspondence learning methods for VHP models (§3.3). Representative works of each method are also given.

Method | Representative Works | Highlights
Cycle-tracking | [146], [147], [148], [155] | Captures temporal variations; may produce wrong correspondences when occlusion occurs.
Reconstructive Learning | [149], [153] | Models fine-grained temporal correspondence and guides focus on part details.
Contrastive Learning | [150], [151], [152], [154] | Searches for discriminative features to segment similar or position-transformed human instances.

TimeCycle [146] tracks the reference patch backward and then forward in the video. The reference patch and the tracked patch at the end of the tracking cycle are expected to be consistent both in spatial coordinates and in feature representation. Meanwhile, UVC [147] performs region-level tracking and pixel-level correspondence with a shared affinity matrix; the tracked patch feature and the region-corresponding sub-affinity matrix are used to reconstruct the reference patch.
Roles of the target and reference patches are then switched to regularize the affinity matrix to be orthogonal, which satisfies the cycle-consistency constraint. Its later version, UVC+ [155], combines features learned by image-based tasks with their video-based counterparts to further boost performance. Lately, CRW [148] represents a video as a graph, where nodes are patches and edges are affinities between nodes in adjacent frames. A cross-entropy loss guides a graph walk that tracks the initial node bi-directionally in feature space; after a round of cycle paths, the initial node itself is taken as the target node. However, the cycle-consistency in [146], [148] strictly assumes that the target patch remains visible in consecutive frames. Once it is occluded or disappears, the correspondences will be incorrectly assigned, leaving an optimal transport problem between video frames.

• Reconstructive Learning. As video content shifts smoothly over time, pixels in a "query" frame can be considered as copies of a set of pixels in other reference frames [156], [157]. Following UVC [147] in establishing pixel-level correspondence, several methods [149], [153] are proposed to learn temporal correspondence entirely by reconstructing correlated frames. Subsequently, ContrastCorr [149] not only learns from intra-video self-supervision, but goes a step further by introducing inter-video transformations as negative correspondences. The inter-video distinction forces the feature extractor to learn discrimination between videos while preserving the fine-grained matching characteristic among intra-video frame pairs.
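The copy-paste reconstruction objective shared by this family can be sketched as follows (a minimal NumPy sketch; function and variable names are illustrative, and a real implementation reconstructs raw pixels through features from a learnable encoder): each query pixel is rebuilt as an affinity-weighted mixture of reference-frame pixels, and the reconstruction error supervises the affinity.

```python
import numpy as np

def soft_affinity(query, reference, temperature=0.07):
    """(N, D) x (M, D) -> row-stochastic (N, M) affinity."""
    logits = query @ reference.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)

def reconstruction_loss(query_feat, ref_feat, query_colors, ref_colors):
    """Rebuild the query frame's colors as an affinity-weighted
    combination of reference-frame colors; the squared error is the
    self-supervised training signal behind the affinity."""
    A = soft_affinity(query_feat, ref_feat)      # (N, M)
    reconstructed = A @ ref_colors               # (N, C)
    return np.mean((reconstructed - query_colors) ** 2)

rng = np.random.default_rng(1)
q_feat, r_feat = rng.normal(size=(64, 16)), rng.normal(size=(64, 16))
q_col, r_col = rng.uniform(size=(64, 3)), rng.uniform(size=(64, 3))
loss = reconstruction_loss(q_feat, r_feat, q_col, r_col)
```

At test time the same affinity propagates part labels instead of colors: replacing `ref_colors` with one-hot label maps of the annotated first frame yields soft label predictions for the query frame.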
Based on the intra-inter video correlation, LIIR [153] introduces a locality-aware reconstruction framework, which encodes position information and incorporates spatial compactness into intra-video correspondence learning, for locality-aware and efficient visual tracking.

Fig. 4: Correlations of different SHP, MHP and VHP methods (§3.4). We use the connections between the arc edges to summarize the correlations between human parsing methods; each connecting line stands for a study that uses both methods. The longer the arc, the more methods of this kind, and likewise for the width of the connecting lines. This correlation summary reveals the prevalence of various human parsing methods.

• Contrastive Learning. Following the idea of pulling positive pairs close together and pushing negative pairs away from each other, a considerable number of VHP algorithms adopt contrastive learning as the training objective. To solve the optimal transport problem, CLTC [150] proposes to mine positive and semi-hard negative correspondences via consistency estimation and dynamic hardness discrimination, respectively. The well-defined positive and negative pixel pairs prevent the side effects of inconsistent positives and overly hard/easy negatives on contrastive learning. Unlike most methods that perform patch-level contrastive learning, VFS [151] learns visual correspondences at frame level.
Following the data augmentation of image-level contrastive learning [158] and a well-designed temporal sampling strategy, VFS encourages convolutional features to find correspondences between similar objects and parts. Lately, [152], [154] extend the video graph with spatial relations of neighboring nodes, which determine the aggregation strength from intra-frame neighbors. The proposed space-time graph draws more attention to the association of center-neighbor pairs, thus explicitly helping to learn correspondences between part instances. SCC [154] mixes sequential Bayesian filters to formulate the optimal paths that track nodes from one frame to others, alleviating the missing correspondences caused by random occlusion.

Remark. Within our investigation scope, current VHP research essentially follows an unsupervised semi-automatic video object segmentation setup. Considering the potential demand, however, it would be more desirable to fully utilize the annotations and solve the VHP problem in an instance-discriminative manner, i.e., as a fine-grained video instance segmentation task. The highlights of temporal correspondence learning methods for VHP are shown in Table 6.

Fig. 5: Correlations of different SHP, MHP and VHP studies (§3.4). We list all the involved human parsing studies as dots and use connecting lines to represent their citing relations. The citing relation here refers to citations that appear in experimental comparisons, to avoid low-correlation citations in background introductions. As each line represents a citation between two studies, the larger the dot, the more times a study is cited. These correlations highlight the relatively prominent studies.

3.4 Summary

Through the detailed review, we have subdivided SHP, MHP, and VHP studies into multiple methods and discussed their characteristics. To further investigate the development picture of the human parsing community, we summarize the correlations of the methods in Figure 4 and the correlations of the involved studies in Figure 5, respectively.

Figure 4 presents correlations between research methods, i.e., two methods are connected if a study uses both as its technical components, so the length of an arc represents the number of studies using a method. The distribution of connecting lines shows that Graph (Structure), Attention (Mechanism), and Edge(-aware Learning) of SHP are more correlated with multiple other methods, which indicates their compatibility with others and their prevalence in the community. It is worth noting that although Tree (Structure) has many correlations with others, a large proportion of them are with the Graph method. This phenomenon indicates that the Tree method is much less generalizable than the Graph, Attention, and Edge methods. Regrettably, the negligible relations between VHP and other methods show that current VHP studies have not yet gone deep into part relationship modeling or human instance discrimination.

The correlations of human parsing studies are presented in the form of citing relations in Figure 5; each line represents a citation between two studies. For reliable statistics, we only consider citations that appear in experimental comparisons for all studies.
From the citing relations, we can easily observe that Attention [33], JPPNet [2], CE2P [44], CNIF [3], and PGN [8] have the largest dots, i.e., they are experimentally compared against by most other studies, which indicates that the community recognizes them as baseline studies of great prominence. Additionally, since CE2P proposed handling the MHP sub-task with a 2S-TD pipeline and set a milestone, many SHP studies have started to compare their algorithms with MHP studies; this trend breaks down the barriers between the two sub-tasks of human parsing. Lastly, similar to the method correlations, VHP studies cite strictly along their own chronological order, which once again shows that VHP studies have not focused on human-centric data.

Synthesizing the detailed review and correlation analysis, we can draw some conclusions about the historical evolution of human parsing models. First, the research focus has gradually shifted from SHP to MHP and VHP. As more challenging tasks, the latter two also have greater application potential. With the emergence of high-quality annotated datasets and the improvement of computing power, they have received increasing attention. Secondly, the technical diversity is insufficient, and the achievements of representation learning in recent years have not fully benefited the human parsing field. Finally, the number of open-source works has increased significantly but is still insufficient. It is hoped that subsequent researchers will open-source their code and models as much as possible to benefit follow-up research.

4 HUMAN PARSING DATASETS

In the past decades, a variety of visual datasets have been released for human parsing (upper part of Figure 3). We summarize the classical and commonly used datasets in Table 7, and give a detailed review from multiple angles.
4.1 Single Human Parsing (SHP) Datasets

• Fashionista (FS) [1] consists of 685 photographs collected from Chictopia.com, a social networking website for fashion bloggers. There are 456 training images and 299 testing images annotated with 56-class semantic labels, and text tags of garment items and styling are also provided. Fashionista was once the main single human/clothing parsing dataset but was limited by its scale; it is rarely used now.
• Colorful Fashion Parsing Data (CFPD) [25] is also collected from Chictopia.com, and provides 23-class noisy semantic labels and 13-class color labels. The annotated images are usually grouped into 1,341/1,341 for train/test.
• DailyPhotos (DP) [14] contains 2,500 high-resolution images, which are crawled following the same strategy as the Fashionista dataset and thoroughly annotated with 19 categories.
• PPSS [159] includes 3,673 annotated samples collected from 171 videos of different surveillance scenes, and provides pixel-wise annotations for hair, face, upper-/lower-clothes, arm, and leg. It presents diverse real-world challenges, e.g., pose variations, illumination changes, and occlusions. There are 1,781 and 1,892 images for training and testing, respectively.
• ATR [22] combines data from three small benchmark datasets: Fashionista [1] containing 685 images, CFPD [25] containing 2,682 images, and DailyPhotos [14] containing 2,500 images. The labels of the Fashionista and CFPD datasets are merged into 18 categories. To enlarge the diversity, another 1,833 challenging images are collected and annotated to construct the Human Parsing in the Wild (HPW) dataset. The final combined dataset contains 7,700 images, consisting of 6,000 images for training, 1,000 for testing, and 700 as the validation set.
• Chictopia10k [37] contains 10,000 real-world human pictures from Chictopia.com, with pixel-wise labels annotated following [22].
The dataset mainly contains images in the wild (e.g., more challenging poses, occlusion, and clothes).
• SYSU-Clothes [114] consists of 2,098 fashion photos in high resolution (about 800×500 on average) from shopping websites. In this dataset, six categories of clothing attributes (i.e., clothing category, clothing color, clothing length, clothing shape, collar shape, and sleeve length) and 124 attribute types of all categories are collected.
• Look into Person (LIP) [116] is the most popular single human parsing dataset, which is annotated pixel-wise with 19 semantic human part labels and one background label. LIP contains 50,462 annotated images and is grouped into 30,462/10,000/10,000 for train/val/test. The images in the LIP dataset are cropped person instances from the COCO [163] training and validation sets.

Remark. ATR and LIP are the mainstream benchmarks among these single human parsing datasets. In recent years, the research purpose has changed from "clothing" to "human", and the data scale and annotation quality have also been significantly improved.

4.2 Multiple Human Parsing (MHP) Datasets

• PASCAL-Person-Part (PPP) [160] is annotated from PASCAL-VOC-2010 [164]; it contains 3,533 multi-person images with challenging poses and is split into 1,716 training images and 1,817 test images. Each image is pixel-wise annotated with 7 classes, namely head, torso, upper/lower arms, upper/lower legs, and a background category.
• MHP-v1.0 [161] contains 4,980 multi-person images with fine-grained annotations at pixel level. For each person, it defines 7 body parts, 11 clothing/accessory categories, and one background label. The train/val/test sets contain 3,000/1,000/980 images, respectively.
• MHP-v2.0 [162] is an extended version of MHP-v1.0 [161], which provides more images and richer categories.
MHP-v2.0 contains 25,403 images and has great diversity in image resolution (from 85×100 to 4,511×6,919) and human instance number (from 2 to 26 persons). These images are split into 15,403/5,000/5,000 for train/val/test with 59 categories.
• COCO-DensePose (COCO-DP) [74] aims at establishing the mapping between all human pixels of an RGB image and the 3D surface of the human body, and has 27,659 images (26,151/1,508 for train/test splits) gathered from COCO [163]. The dataset provides 15 pixel-wise human parts with dense keypoint annotations.
• Crowd Instance-level Human Parsing (CIHP) [8] is the largest multiple human parsing dataset to date. With 38,280 diverse real-world images, the persons are labelled with pixel-wise annotations on 20 categories. It consists of 28,280 training and 5,000 validation images with publicly available annotations, as well as 5,000 test images with annotations withheld for benchmarking purposes. All images of the CIHP dataset contain two or more instances, with an average of 3.4.

TABLE 7: Statistics of existing human parsing datasets. See §4.1-§4.3 for more detailed descriptions. The 15 datasets are divided into 3 groups according to the human parsing taxonomy. "Instance" indicates that instance-level human labels are provided; "Temporal" indicates that video-level labels are provided; "Super-pixel" indicates that super-pixels are used for labeling.

Dataset | Year | Pub. | #Images | #Train/Val/Test | #Class | Purpose | Instance | Temporal | Super-pixel | Other Annotations
Fashionista [1] | 2012 | CVPR | 685 | 456/-/299 | 56 | Clothing | - | - | ✓ | Clothing-tag
CFPD [25] | 2013 | TMM | 2,682 | 1,341/-/1,341 | 23 | Clothing | - | - | ✓ | Color-seg.
DailyPhotos [14] | 2013 | ICCV | 2,500 | 2,500/-/- | 19 | Clothing | - | - | ✓ | Clothing-tag
PPSS [159] | 2013 | ICCV | 3,673 | 1,781/-/1,892 | 6 | Human | - | - | - | -
ATR [22] | 2015 | TPAMI | 7,700 | 6,000/700/1,000 | 18 | Human | - | - | - | -
Chictopia10k [37] | 2015 | ICCV | 10,000 | 10,000/-/- | 18 | Clothing | - | - | - | Clothing-tag
SYSU-Clothes [114] | 2016 | TMM | 2,682 | 2,682/-/- | 57 | Clothing | - | - | ✓ | Clothing-tag
LIP [116] | 2017 | CVPR | 50,462 | 30,462/10,000/10,000 | 20 | Human | - | - | ✓ | -
HRHP [7] | 2021 | CVPRW | 7,500 | 6,000/500/1,000 | 20 | Human | - | - | - | -
PASCAL-Person-Part [160] | 2014 | CVPR | 3,533 | 1,716/-/1,817 | 7 | Human | ✓ | - | - | Human-box
MHP-v1.0 [161] | 2017 | ArXiv | 4,980 | 3,000/1,000/980 | 19 | Human | ✓ | - | - | Human-box
MHP-v2.0 [162] | 2018 | MM | 25,403 | 15,403/5,000/5,000 | 59 | Human | ✓ | - | - | Human-box
COCO-DensePose [74] | 2018 | CVPR | 27,659 | 26,151/-/1,508 | 15 | Human | ✓ | - | - | Human-box/keypoints/densepoints
CIHP [8] | 2018 | ECCV | 38,280 | 28,280/5,000/5,000 | 20 | Human | ✓ | - | - | Human-box
VIP [9] | 2018 | MM | 21,246 | 18,468/-/2,778 | 20 | Human | ✓ | ✓ | - | Human-box/identity

Remark. So far, several multiple human parsing datasets have high-quality annotations and considerable data scale. In addition to pixel-wise parsing annotations, many datasets provide other rich annotations, such as boxes, keypoints/landmarks, and styles. PPP, CIHP, and MHP-v2.0 are widely studied datasets, and most classical multiple human parsing methods have been verified on them.

4.3 Video Human Parsing (VHP) Datasets

• Video Instance-level Parsing (VIP) [9] is the first video human parsing dataset. VIP contains 404 multi-person Full HD sequences collected from YouTube with great diversity. For every 25 consecutive frames in each sequence, one frame is densely annotated with 20 classes and identities. All the sequences are grouped into 354/50 for train/test, containing 18,468/2,778 annotated frames, respectively.

Remark.
Since video human parsing has only attracted attention in recent years, there are few publicly available datasets, and their data scale and richness still need continuous investment from the community.

4.4 Summary

Through Table 7, we can observe several development trends in human parsing datasets. Firstly, the scale of datasets continues to increase, from hundreds of images in the early years [1] to tens of thousands now [8], [116]. Secondly, the quality of annotation is constantly improving. Some early datasets use super-pixels [1], [114], [116] to reduce the annotation cost, while in recent years pixel-wise accurate annotation has been adopted. Finally, the annotation dimensions are becoming increasingly diverse, e.g., COCO-DensePose [74] provides boxes, keypoints, and UV annotations in addition to parsing.

5 PERFORMANCE COMPARISONS

To provide a more intuitive comparison, we tabulate the performance of several previously discussed models. It should be noted that the experimental settings of each study are not entirely consistent (e.g., backbone, input size, training epochs). Therefore, we suggest taking these comparisons only as references; a more specific analysis requires studying the original articles in depth.

TABLE 8: Quantitative SHP results on ATR test (§5.1) in terms of pixel accuracy (pixAcc), foreground pixel accuracy (FGAcc) and F-1 score (F-1). The three best scores are marked in red, blue, and green, respectively.

Year | Method | Pub. | Backbone | #Input Size | #Epoch | pixAcc | FGAcc | F-1
2012 | Yamaguchi [1] | CVPR | - | - | - | 84.38 | 55.59 | 41.80
2013 | PaperDoll [20] | ICCV | - | - | - | 88.96 | 62.18 | 44.76
2015 | M-CNN [110] | CVPR | - | - | 50 | 89.57 | 73.98 | 62.81
2015 | Co-CNN [37] | ICCV | - | 150×100 | 90 | 95.23 | 80.90 | 76.95
2015 | ATR [22] | TPAMI | - | 227×227 | 120 | 91.11 | 71.04 | 64.38
2016 | LG-LSTM [34] | CVPR | VGG16 | 321×321 | 60 | 96.18 | 84.79 | 80.97
2016 | Graph-LSTM [113] | ECCV | VGG16 | 321×321 | 60 | 97.60 | 91.42 | 83.76
2017 | Struc-LSTM [115] | CVPR | VGG16 | 321×321 | 60 | 97.71 | 91.76 | 87.88
2018 | TGPNet [119] | MM | VGG16 | 321×321 | 35 | 96.45 | 87.91 | 81.76
2019 | CNIF [3] | ICCV | ResNet101 | 473×473 | 150 | 96.26 | 87.91 | 85.51
2020 | CorrPM [45] | CVPR | ResNet101 | 384×384 | 150 | 97.12 | 90.40 | 86.12
2020 | HHP [4] | CVPR | ResNet101 | 473×473 | 150 | 96.84 | 89.23 | 87.25
2020 | SCHP [52] | TPAMI | ResNet101 | 473×473 | 150 | 96.25 | 87.97 | 85.55
2022 | CDGNet [131] | CVPR | ResNet101 | 512×512 | 250 | 97.39 | 90.19 | 87.16

5.1 SHP Performance Benchmarking

We select ATR [22] and LIP [116] as the benchmarks for single human parsing performance comparison, and compare 14 and 26 models, respectively.

5.1.1 Evaluation Metrics

The evaluation metrics of single human parsing are basically consistent with semantic segmentation [31], including pixel accuracy, mean pixel accuracy, and mean IoU. In addition, foreground pixel accuracy and F-1 score are also commonly used metrics on the ATR dataset.
• Pixel accuracy (pixAcc) is the simplest and most intuitive metric, which expresses the proportion of correctly predicted pixels among all pixels.
• Foreground pixel accuracy (FGAcc) only calculates the pixel accuracy of foreground human parts.
• Mean pixel accuracy (meanAcc) is a simple improvement of pixel accuracy, which calculates the proportion of correctly predicted pixels in each category and averages over categories.
• Mean IoU (mIoU) is short for mean intersection over union, which calculates the ratio of the intersection and union of two sets.
The two sets are the ground-truth and the predicted results of each category, respectively.
• F-1 score (F-1) is the harmonic mean of precision and recall, which is a commonly used evaluation metric.

5.1.2 Results

Table 8 presents the performance of the reviewed SHP methods on the ATR test set. Struc-LSTM [115] achieves the best performance, scoring 97.71% pixAcc and 87.88% F-1 score, which greatly surpasses other methods. Table 9 shows the results on the LIP benchmark since 2017. Overall, HIPN [125] and HSSN [5] achieve remarkable results on various metrics, with HIPN scoring 89.14% pixAcc and HSSN scoring 60.37% mIoU.

TABLE 9: Quantitative SHP results on LIP val (§5.1) in terms of pixel accuracy (pixAcc), mean pixel accuracy (meanAcc) and mean IoU (mIoU). The three best scores are marked in red, blue, and green, respectively.

Year | Method | Pub. | Backbone | #Input Size | #Epoch | pixAcc | meanAcc | mIoU
2017 | SSL [116] | CVPR | VGG16 | 321×321 | 50 | - | - | 46.19
2018 | HSP-PRI [76] | CVPR | InceptionV3 | - | - | 85.07 | 60.54 | 48.16
2018 | MMAN [50] | ECCV | ResNet101 | 256×256 | 30 | 85.24 | 57.60 | 46.93
2018 | MuLA [47] | ECCV | Hourglass | 256×256 | 250 | 88.50 | 60.50 | 49.30
2018 | JPPNet [2] | TPAMI | ResNet101 | 384×384 | 60 | 86.39 | 62.32 | 51.37
2019 | CE2P [44] | AAAI | ResNet101 | 473×473 | 150 | 87.37 | 63.20 | 53.10
2019 | CNIF [3] | ICCV | ResNet101 | 473×473 | 150 | 88.03 | 68.80 | 57.74
2019 | BraidNet [57] | MM | ResNet101 | 384×384 | 150 | 87.60 | 66.09 | 54.42
2020 | CorrPM [45] | CVPR | ResNet101 | 384×384 | 150 | - | - | 55.33
2020 | SLRS [51] | CVPR | ResNet101 | 384×384 | 150 | 88.33 | 66.53 | 56.34
2020 | PCNet [39] | CVPR | ResNet101 | 473×473 | 120 | - | - | 57.03
2020 | HHP [4] | CVPR | ResNet101 | 473×473 | 150 | 89.05 | 70.58 | 59.25
2020 | DTCF [46] | MM | ResNet101 | 473×473 | 200 | 88.61 | 68.89 | 57.82
2020 | SemaTree [41] | ECCV | ResNet101 | 384×384 | 200 | 88.05 | 66.42 | 54.73
2020 | OCR [122] | ECCV | HRNetW48 | 473×473 | ~100 | - | - | 56.65
2020 | BGNet [123] | ECCV | ResNet101 | 473×473 | 120 | - | - | 56.82
2020 | HRNet [124] | TPAMI | HRNetW48 | 473×473 | ~150 | 88.21 | 67.43 | 55.90
2020 | SCHP [52] | TPAMI | ResNet101 | 473×473 | 150 | - | - | 59.36
2021 | HIPN [125] | AAAI | ResNet101 | 473×473 | 150 | 89.14 | 71.09 | 59.61
2021 | MCIBI [126] | ICCV | ResNet101 | 473×473 | 150 | - | - | 55.42
2021 | ISNet [127] | ICCV | ResNet101 | 473×473 | 160 | - | - | 56.96
2021 | NPPNet [128] | ICCV | NAS | 384×384 | 120 | - | - | 58.56
2021 | HTCorrM [129] | TPAMI | HRNetW48 | 384×384 | 180 | - | - | 56.85
2022 | CDGNet [131] | CVPR | ResNet101 | 473×473 | 150 | 88.86 | 71.49 | 60.30
2022 | HSSN [5] | CVPR | ResNet101 | 480×480 | ~84 | - | - | 60.37
2022 | PRM [43] | TMM | ResNet101 | 473×473 | 120 | - | - | 58.86

TABLE 10: Quantitative MHP results on PASCAL-Person-Part test (§5.2) in terms of mIoU, APr_vol and APr_50. We only mark the best score in red.

Year | Method | Pub. | Pipeline | Backbone | #Epoch | mIoU | APr_vol | APr_50
2017 | Holistic [138] | BMVC | 1S-TD | ResNet101 | 100 | 66.34 | 38.40 | 40.60
2018 | PGN [8] | ECCV | BU | ResNet101 | ~80 | 68.40 | 39.20 | 39.60
2019 | Parsing R-CNN [61] | CVPR | 1S-TD | ResNet50 | 75 | 62.70 | 40.40 | 43.70
2019 | Unified [139] | BMVC | 1S-TD | ResNet101 | ~600 | - | 43.10 | 48.10
2020 | RP R-CNN [140] | ECCV | 1S-TD | ResNet50 | 75 | 63.30 | 40.90 | 44.10
2020 | NAN [141] | IJCV | BU | - | 80 | - | 52.20 | 59.70
2021 | MGHR [59] | CVPR | BU | ResNet101 | 150 | - | 55.90 | 59.00

5.2 MHP Performance Benchmarking

We select 7 models evaluated on PASCAL-Person-Part [160], 9 models evaluated on CIHP [8], and 8 models evaluated on MHP-v2 [162] to compare the performance of multiple human parsing.

5.2.1 Evaluation Metrics

Generally speaking, multiple human parsing uses mIoU to measure semantic segmentation performance, and APr_vol/APr_50 or APp_vol/APp_50 to measure instance discrimination performance.
• Average precision based on region (APr_vol/APr_50) [165] is similar to the AP metrics in object detection [163].
If the IoU between a predicted part and a ground-truth part is higher than a certain threshold, the prediction is considered correct, and the mean average precision is calculated. APr_vol is defined as the mean of the AP scores over overlap thresholds varying from 0.1 to 0.9 in increments of 0.1, and APr_50 is the AP score at a threshold of 0.5.
• Average precision based on part (APp_vol/APp_50) [141], [161] is adopted to evaluate instance-level human parsing performance. APp is very similar to APr in calculation, except that it calculates mIoU over the whole human body.

TABLE 11: Quantitative MHP results on CIHP val (§5.2) in terms of mIoU, APr_vol and APr_50. We only mark the best score in red.

Year | Method | Pub. | Pipeline | Backbone | #Epoch | mIoU | APr_vol | APr_50
2018 | PGN [8] | ECCV | BU | ResNet101 | ~80 | 55.80 | 33.60 | 35.80
2019 | CE2P [44] | AAAI | 2S-TD | ResNet101 | 150 | 59.50 | 42.80 | 48.70
2019 | Parsing R-CNN [61] | CVPR | 1S-TD | ResNet50 | 75 | 56.30 | 36.50 | 40.90
2019 | BraidNet [57] | MM | 2S-TD | ResNet101 | 150 | 60.62 | 43.59 | 48.99
2019 | Unified [139] | BMVC | 1S-TD | ResNet101 | ~36 | 53.50 | 37.00 | 41.80
2020 | RP R-CNN [140] | ECCV | 1S-TD | ResNet50 | 150 | 60.20 | 42.30 | 48.20
2020 | SemaTree [41] | ECCV | 2S-TD | ResNet101 | 200 | 60.87 | 43.96 | 49.27
2020 | SCHP [52] | TPAMI | 2S-TD | ResNet101 | 150 | 67.47 | 52.74 | 58.94
2022 | AIParsing [142] | TIP | 1S-TD | ResNet101 | 75 | 60.70 | - | -

TABLE 12: Quantitative MHP results on MHP-v2 val (§5.2) in terms of mIoU, APp_vol and APp_50. We only mark the best score in red.

Year | Method | Pub. | Pipeline | Backbone | #Epoch | mIoU | APp_vol | APp_50
2019 | CE2P [44] | AAAI | 2S-TD | ResNet101 | 150 | 41.11 | 42.70 | 34.47
2019 | Parsing R-CNN [61] | CVPR | 1S-TD | ResNet50 | 75 | 36.20 | 38.50 | 24.50
2020 | RP R-CNN [140] | ECCV | 1S-TD | ResNet50 | 150 | 38.60 | 46.80 | 45.30
2020 | SemaTree [41] | ECCV | 2S-TD | ResNet101 | 200 | - | 42.51 | 34.36
2020 | NAN [141] | IJCV | BU | - | 80 | - | 41.78 | 25.14
2020 | SCHP [52] | TPAMI | 2S-TD | ResNet101 | 150 | 45.21 | 45.25 | 35.10
2021 | MGHR [59] | CVPR | BU | ResNet101 | 150 | 41.40 | 44.30 | 39.00
2022 | AIParsing [142] | TIP | 1S-TD | ResNet101 | 75 | 40.10 | 46.60 | 43.20

5.2.2 Results

The PASCAL-Person-Part benchmark is the classical benchmark in multiple human parsing. Table 10 gathers the results of 7 models on the PASCAL-Person-Part test set. PGN [8] is the top one in the mIoU metric. In the APr_vol/APr_50 metrics, MGHR [59] and NAN [141] are the best two methods at present. The results on the CIHP val set are summarized in Table 11. As seen, SCHP [52] performs the best on all metrics, yielding 67.47% mIoU, 52.74% APr_vol, and 58.94% APr_50. Table 12 summarizes 8 models on the MHP-v2 val set. SCHP achieves the best mIoU again. In terms of APp_vol/APp_50, RP R-CNN [140] holds the best results so far.

5.3 VHP Performance Benchmarking

The VIP dataset is widely used to benchmark video human parsing. We select 11 models published since 2018.

5.3.1 Evaluation Metrics

Similar to multiple human parsing, mIoU and APr_vol are adopted for video human parsing performance evaluation.

5.3.2 Results

Table 13 gives the results of recent methods on the VIP val set. It is clear that LIIR [153] and UVC+ [155] achieve the best performance in the mIoU and APr_vol metrics, respectively.

5.4 Summary

Through the above performance comparison, we can observe several apparent phenomena. The first and most important is the fairness of the experimental setting.
For single human parsing and multiple human parsing, many studies have not given detailed experimental settings, or differ greatly in several essential hyper-parameters, making fair comparison impossible. The second is that most methods report neither the number of parameters nor the inference time, which allows some methods to gain an advantage in comparison by increasing model capacity, and also causes trouble for computation-sensitive application scenarios such as social media and autonomous driving.

TABLE 13: Quantitative VHP results on VIP val (§5.3) in terms of mIoU and APr_vol. The three best scores are marked in red, blue, and green, respectively.

Year | Method | Pub. | Backbone | mIoU | APr_vol
2019 | TimeCycle [146] | CVPR | ResNet50 | 28.9 | 15.6
2019 | UVC [147] | NeurIPS | ResNet18 | 34.1 | 17.7
2020 | CRW [148] | NeurIPS | ResNet18 | 38.6 | -
2021 | ContrastCorr [149] | AAAI | ResNet18 | 37.4 | 21.6
2021 | CLTC [150] | CVPR | ResNet18 | 37.8 | 19.1
2021 | VFS [151] | ICCV | ResNet18 | 39.9 | -
2021 | JSTG [152] | ICCV | ResNet18 | 40.2 | -
2022 | LIIR [153] | CVPR | ResNet18 | 41.2 | 22.1
2022 | SCC [154] | CVPR | ResNet18 | 40.8 | -
2022 | UVC+ [155] | ArXiv | ResNet18 | 38.3 | 22.2

In addition to the above phenomena, we can also summarize some positive signals. Firstly, human parsing research has shown an upward trend in recent years, especially since 2020. Secondly, although some studies have achieved high performance on LIP, CIHP, and VIP, these benchmarks are still not saturated; the community thus needs to continue its efforts. Thirdly, some specific issues and hotspots of human parsing are gradually attracting attention, which will further promote the progress of the whole field.

6 AN OUTLOOK: FUTURE OPPORTUNITIES OF HUMAN PARSING

After ten years of development, with the whole community's efforts, human parsing has made remarkable achievements, but it has also encountered a bottleneck.
In this section, we discuss the opportunities of human parsing in the next era from multiple perspectives, hoping to promote progress in the field.

6.1 A Transformer-based Baseline for Human Parsing

Although several mainstream benchmarks of human parsing have not been saturated, accuracy growth has slowed down. We believe the reason is that some advances in deep learning have not yet benefited the human parsing task (e.g., transformers [166]–[168], unsupervised representation learning [158], [169]–[171]), and that researchers lack a concise and easily extensible code base. Therefore, the community urgently needs a new and strong baseline. We consider that a new human parsing baseline should have the following four characteristics: a) Universality, so that it can be applied to all mainstream human parsing tasks, including SHP, MHP, and VHP; b) Conciseness, meaning the baseline method should not be too complex; c) Extensibility, i.e., a complete code base that is easy to modify or extend with other modules or methods; d) High performance, achieving state-of-the-art or at least comparable performance on the mainstream benchmarks under a fair experimental setting. Based on the above views, we design a new transformer-based baseline for human parsing. The proposed baseline builds on the Mask2Former [172] architecture with a few improvements adapted to human parsing, and is called Mask2Former for Parsing (M2FP). M2FP adapts to almost all human parsing tasks and yields strong performance.1

1. Code and models are publicly available at https://github.com/soeaver/M2FP

6.1.1 Mask2Former for Parsing

• Modeling Humans as Group Queries. To solve the three human parsing sub-tasks, we need to simultaneously model the relationships among parts and distinguish human instances. The DETR series of works [168], [172]–[174] regard objects as queries and transform object detection or instance segmentation into a direct set prediction problem.
A naive idea is to regard human parts as queries, then use mask classification to predict the category and mask of each part. However, this creates two problems that cannot be ignored. Firstly, modeling only parts makes it difficult to learn the global relationship between parts and humans. Secondly, the subordination between a part and a human instance is unknown, making the approach unsuitable for the MHP task. Thus, we introduce the body hierarchy into the queries and use the powerful sequence encoding ability of the transformer to build multiple hierarchical relationships between parts and humans. Specifically, we explicitly divide the queries into three groups: background queries, part queries, and human queries. Through the relationship modeling ability of the self-attention mechanism, besides the basic part-part relationship, the part-human, human-human, and part/human-background relationships are also modeled. Thanks to the direct modeling of parts and the introduction of multiple hierarchical granularities, M2FP can be applied to all supervised human parsing tasks.

• Architecture and Pipeline. The architecture of the proposed M2FP is illustrated in Figure 6. We make the smallest possible modification to Mask2Former. An encoder, composed of a backbone and a pixel decoder [173], extracts image or video features. The features are then flattened and sent into a transformer decoder, which consists of multiple repeated units, each containing a masked attention module, a self-attention module, and a shared feed-forward network (FFN) in turn. The grouped queries and the flattened features exchange information through the transformer decoder, and finally a bipartite matcher uniquely matches queries to ground-truths.
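The grouped-query design described above, together with Mask2Former-style mask-classification inference, can be sketched in a few lines. The following is a minimal NumPy illustration, not the actual M2FP implementation: the group sizes, feature dimensions, single attention layer, random projections, and the 0.5 mask threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

C = 256                              # query / feature dimension (assumed)
H = W = 8                            # spatial size of the decoded feature map (assumed)
N_BKG, N_PART, N_HUM = 1, 50, 10     # hypothetical sizes of the three query groups

# One query set, explicitly partitioned into background / part / human groups.
queries = np.concatenate([
    rng.standard_normal((N_BKG, C)),   # background queries
    rng.standard_normal((N_PART, C)),  # part queries
    rng.standard_normal((N_HUM, C)),   # human queries
])
features = rng.standard_normal((H * W, C))  # flattened encoder features

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Self-attention over ALL queries in one pass: part-part, part-human,
# human-human, and part/human-background relationships are modeled jointly.
attn = softmax(queries @ queries.T / np.sqrt(C))
queries = attn @ queries

# Mask classification: each query predicts a class distribution and a mask.
n_classes = 20                                       # e.g. 19 parts + background (assumed)
class_probs = softmax(queries @ rng.standard_normal((C, n_classes)))
mask_logits = (queries @ features.T).reshape(-1, H, W)
mask_probs = 1.0 / (1.0 + np.exp(-mask_logits))      # per-query sigmoid masks

# SHP-style inference: combine class probabilities and mask probabilities by
# matrix multiplication, then take the per-pixel argmax.
semseg = np.einsum('qc,qhw->chw', class_probs, mask_probs)   # (n_classes, H, W)
parsing = semseg.argmax(0)                                   # semantic parsing map

# MHP-style grouping: score each human query by the intersection ratio of
# its (thresholded) mask with the predicted foreground.
human_masks = mask_probs[N_BKG + N_PART:] > 0.5
fg = parsing > 0
overlap = np.array([(m & fg).sum() / max(fg.sum(), 1) for m in human_masks])
print(parsing.shape, overlap.shape)
```

In the real model the decoder stacks masked-attention, self-attention, and FFN units and a bipartite matcher supervises the query-to-ground-truth assignment; the sketch only shows how grouped queries and mask classification fit together.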
For SHP, in the inference stage, the background and part masks are combined with their class predictions through matrix multiplication to compute the final semantic segmentation prediction. For MHP, the intersection ratio between the semantic segmentation prediction and the human masks is calculated to obtain the final instance-level human parsing prediction. M2FP can also be extended to the supervised VHP task: following [175], the background, parts, and humans in a video can be regarded as 3D spatio-temporal masks, and the sequence encoding ability of the transformer can be used to make an end-to-end prediction.

Fig. 6: Architecture of the proposed M2FP (§6.1). Through the explicit construction of background, part, and human queries, we can model the relationship between humans and parts and predict high-quality masks.

6.1.2 Experiments

• Experimental Setup. We validate M2FP on several mainstream benchmarks, including LIP, PASCAL-Person-Part, CIHP, and MHP-v2. All models are trained with nearly identical hyper-parameters on 8 NVIDIA V100 GPUs. Specifically, we use the AdamW [176] optimizer with a mini-batch size of 16 and an initial learning rate of 0.0004 under a poly (LIP) or step (PASCAL-Person-Part, CIHP, and MHP-v2) learning rate schedule, and train each model for 150 epochs. Large-scale jittering in the range [0.1, 2.0] and typical data augmentation techniques, e.g., fixed-size random crop (512×384 for LIP, 800×800 for PASCAL-Person-Part, CIHP,
and MHP-v2), random rotation from [-40°, +40°], random color jittering, and horizontal flipping, are also used. For fair comparison, horizontal flipping is adopted during testing, and multi-scale testing is used for LIP. The default backbone is ResNet-101 pre-trained on ImageNet-1K [177].

Fig. 7: Comparison of M2FP with previous human parsing state-of-the-art models. M2FP achieves state-of-the-art (PPP, CIHP, and MHP-v2) or comparable performance (LIP) on all human parsing sub-tasks.

• Results. As shown in Table 14 and Figure 7, M2FP achieves state-of-the-art or comparable performance across a broad range of human parsing benchmarks. For SHP, M2FP falls behind only HIPN [125] and CDGNet [131], obtaining 88.93% pixAcc. and 59.86% mIoU, showing great potential in parts relationship modeling. For MHP, M2FP shows remarkable performance, greatly surpassing the existing methods on all metrics and even exceeding the state-of-the-art two-stage top-down method, i.e., SCHP [52]. Specifically, M2FP outperforms PGN [8] by 4.14 points mIoU and MGHR [59] by 0.56 points APr_vol on PASCAL-Person-Part. On the more challenging CIHP and MHP-v2, M2FP beats SCHP in terms

TABLE 14: Overview of M2FP results on various human parsing benchmarks, with bold results denoting that M2FP achieves a new state-of-the-art.
Method           LIP pixAcc.  LIP mIoU  PPP mIoU  PPP APr_vol  CIHP mIoU  CIHP APr_vol  MHP-v2 mIoU  MHP-v2 APp_vol
HIPN [125]       89.14        59.61     -         -            -          -             -            -
HSSN [5]         -            60.37     -         -            -          -             -            -
PGN [8]          -            -         68.40     39.20        55.80      33.60         -            -
MGHR [59]        -            -         -         55.90        -          -             41.40        44.30
SCHP [52]        -            -         -         -            67.47      52.74         45.21        45.25
RP R-CNN [140]   -            -         63.30     40.90        60.20      42.30         38.60        46.80
M2FP (ours)      88.93        59.86     72.54     56.46        69.15      60.47         47.64        53.36

of mIoU while running in an end-to-end manner. Meanwhile, M2FP is also 7.96 points ahead of SCHP in APr_vol (CIHP) and 5.97 points ahead of RP R-CNN [140] in APp_vol (MHP-v2). These results demonstrate that M2FP surpasses almost all human parsing methods in a concise, effective, and universal way, and can be regarded as a new baseline for the next era.

6.2 Under-Investigated Open Issues

Based on the reviewed research, we list several under-investigated open issues that we believe should be pursued.

• Efficient Inference. In practical applications, human parsing models generally need real-time or even faster inference speed. Current research has not paid enough attention to this issue, especially in multiple human parsing. Although some literature [59], [142] has discussed model efficiency, these methods cannot achieve real-time inference, and no human parser has been designed for this purpose. Therefore, from the perspective of practical application, designing human parsing models for efficient inference remains an under-investigated open issue.

• Synthetic Dataset. It is common practice in many fields to use synthetic datasets to train models and transfer them to real scenes. Through CG technology (e.g., NVIDIA Omniverse2), we can obtain almost unlimited synthetic human data, together with parsing annotations, at very low cost. Considering the labeling cost of human parsing datasets, this is a very attractive scheme. Wood et al.
have made a preliminary attempt on the face parsing task and achieved excellent performance [178], but such research is so far lacking in the human parsing field.

2. https://developer.nvidia.com/nvidia-omniverse

• Long-tailed Phenomenon. The long-tailed distribution is the most common phenomenon in the real world, and it also exists in the human parsing field. For example, the Gini coefficient of MHP-v2.0 is as high as 0.747 [179], exceeding some artificially created long-tailed datasets, yet this problem is currently ignored. As a result, existing methods are often brittle once exposed to the real world, where they are unable to adapt to and robustly deal with tail categories. This calls for a more general human parsing model with the ability to adapt to long-tailed distributions in the real world.

6.3 New Directions

Considering some potential applications, we shed light on several possible research directions.

• Video Instance-level Human Parsing. Current VHP research basically follows an unsupervised, semi-automatic video object segmentation setting, which reduces the labeling cost at a great loss of accuracy. However, most practical applications of video human parsing require extremely high precision. Therefore, making full use of annotations and solving VHP in an instance-discriminative manner, i.e., as a fine-grained video instance segmentation task, has great research prospects.

• Whole-body Human Parsing. Besides human parsing, face parsing and hand parsing [72], [73] are also important issues. To fully understand the pixel-wise spatio-temporal attributes of humans in the wild, it is necessary to parse the body, face, and hands simultaneously, which implies a new direction of end-to-end parsing of the whole body: Whole-body Human Parsing.
Natural hierarchical annotation and large-scale variation bring new challenges to existing parsing techniques. Thus, targeted datasets and whole-body parsers are necessary.

• Cooperation across Different Human-centric Directions. Some human-centric visual tasks (e.g., human attribute recognition [180], pose estimation [181], and human mesh reconstruction [70]) face challenges similar to those of human parsing. Although these fields have developed independently, different tasks can play a positive role in promoting each other. Moreover, the settings of different human-centric visual tasks are related, yet there is no precedent for modeling these tasks in a unified framework. Thus, we call for closer collaboration across different human-centric visual tasks.

7 CONCLUSIONS

As far as we know, this is the first survey to comprehensively review deep learning techniques for human parsing, covering three sub-tasks: SHP, MHP, and VHP. We first provided the readers with the necessary knowledge, including task settings, background concepts, relevant problems, and applications. Afterward, we summarized the mainstream deep learning methods based on a human parsing taxonomy, and analyzed them according to their theoretical background, technical contributions, and solving strategies. We also reviewed 14 popular human parsing datasets and benchmarked results on the 6 most widely used ones. To promote sustainable community development, we discussed under-investigated open issues and provided insight into new directions. We also put forward a new transformer-based human parsing framework, serving as a high-performance baseline for follow-up research through a universal, concise, and extensible solution. In summary, we hope this survey provides an effective way to understand the current state-of-the-art human parsing models and promotes the sustainable development of this research field.

REFERENCES

[1] K. Yamaguchi, M. H. Kiapour, L. E.
Ortiz, and T. L. Berg, “Parsing +clothing in fashion photographs,” in Proceedings of the IEEE +Conference on Computer Vision and Pattern Recognition, 2012, pp. +3570–3577. 1, 2, 5, 6, 7, 10, 11 +[2] +X. Liang, K. Gong, X. Shen, and L. Lin, “Look into person: Joint +body parsing pose estimation network and a new benchmark,” +IEEE Transactions on Pattern Analysis and Machine Intelligence, +vol. 41, no. 4, pp. 871–885, 2018. 1, 5, 6, 7, 10, 12 +[3] +W. Wang, Z. Zhang, S. Qi, J. Shen, Y. Pang, and L. Shao, “Learning +compositional neural information fusion for human parsing,” in +Proceedings of the IEEE/CVF International Conference on Computer +Vision, 2019, pp. 5703–5713. 1, 5, 6, 7, 10, 11, 12 +[4] +W. Wang, H. Zhu, J. Dai, Y. Pang, J. Shen, and L. Shao, “Hier- +archical human parsing with typed part-relation reasoning,” in +Proceedings of the IEEE/CVF Conference on Computer Vision and +Pattern Recognition, 2020, pp. 8929–8939. 1, 5, 6, 7, 11, 12 +[5] +L. Li, T. Zhou, W. Wang, J. Li, and Y. Yang, “Deep hierarchical +semantic segmentation,” in Proceedings of the IEEE/CVF Conference +on Computer Vision and Pattern Recognition, 2022, pp. 1246–1257. 1, +5, 12, 14 +[6] +L. Lin, D. Zhang, and W. Zuo, Human centric visual analysis with +deep learning. +Singapore: Springer, 2020. 1 +[7] +“Learning from limited or imperfect data (l2id) workshop,” https: +//l2id.github.io/challenge localization.html, 2021. 1, 11 +[8] +K. Gong, X. Liang, Y. Li, Y. Chen, M. Yang, and L. Lin, “Instance- +level human parsing via part grouping network,” in Proceedings of +the European Conference on Computer Vision, 2018, pp. 770–785. 1, 7, +8, 10, 11, 12, 14 +[9] +Q. Zhou, X. Liang, K. Gong, and L. Lin, “Adaptive temporal +encoding network for video instance-level human parsing,” in +Proceedings of the 26th ACM International Conference on Multimedia, +2018, pp. 1527–1535. 1, 8, 11 +[10] +A. Borras, F. Tous, J. Llados, and M. 
Vanrell, “High-level clothes +description based on colour-texture and structural features,” in +Iberian Conference on Pattern Recognition and Image Analysis, 2003, +pp. 108–116. 1 +[11] +H. Chen, Z. Xu, Z. Liu, and S.-C. Zhu, “Composite templates +for cloth modeling and sketching,” in Proceedings of the IEEE +Conference on Computer Vision and Pattern Recognition, 2006, pp. +943–950. 1 +[12] +P. Guan, O. Freifeld, and M. J. Black, “A 2d human body model +dressed in eigen clothing,” in Proceedings of the European Conference +on Computer Vision, 2010, pp. 285–298. 1 +[13] +Y. Yang and D. Ramanan, “Articulated pose estimation with +flexible mixtures-of-parts,” in Proceedings of the IEEE Conference on +Computer Vision and Pattern Recognition, 2011, pp. 1385–1392. 1 +[14] +J. Dong, Q. Chen, W. Xia, Z. Huang, and S. Yan, “A deformable +mixture parsing model with parselets,” in Proceedings of the IEEE +International Conference on Computer Vision, 2013, pp. 3408–3415. 1, +2, 5, 6, 7, 10, 11 +[15] +M. Caron, P. Bojanowski, A. Joulin, and M. Douze, “Deep cluster- +ing for unsupervised learning of visual features,” in Proceedings of +the European Conference on Computer Vision, 2018, pp. 139–156. 1 +[16] +L. Zhu, Y. Chen, Y. Lu, C. Lin, and A. Yuille, “Max margin and/or +graph learning for parsing the human body,” in Proceedings of the +IEEE Conference on Computer Vision and Pattern Recognition, 2008, +pp. 1–8. 1 +[17] +J. Dong, Q. Chen, X. Shen, J. Yang, and S. Yan, “Towards unified +human parsing and pose estimation,” in Proceedings of the IEEE +Conference on Computer Vision and Pattern Recognition, 2014, pp. +843–850. 1, 5, 6, 7 +[18] +A. Kae, K. Sohn, H. Lee, and E. Learned-Miller, “Augmenting +crfs with boltzmann machine shape priors for image labeling,” in +Proceedings of the IEEE Conference on Computer Vision and Pattern +Recognition, 2013, pp. 2019–2026. 1 +[19] +L. Ladicky, P. H. Torr, and A. 
Zisserman, “Human pose estimation +using a joint pixel-wise and part-wise formulation,” in Proceedings +of the IEEE Conference on Computer Vision and Pattern Recognition, +2013, pp. 3578–3585. 1, 3 +[20] +K. Yamaguchi, M. Hadi Kiapour, and T. L. Berg, “Paper doll +parsing: Retrieving similar styles to parse clothing items,” in +Proceedings of the IEEE International Conference on Computer Vision, +2013, pp. 3519–3526. 1, 5, 6, 7, 11 +[21] +Y. Bo and C. C. Fowlkes, “Shape-based pedestrian parsing,” in +Proceedings of the IEEE Conference on Computer Vision and Pattern +Recognition, 2011, pp. 2265–2272. 1 +[22] +X. Liang, S. Liu, X. Shen, J. Yang, L. Liu, J. Dong, L. Lin, and S. Yan, +“Deep human parsing with active template regression,” IEEE +Transactions on Pattern Analysis and Machine Intelligence, vol. 37, +no. 12, pp. 2402–2414, 2015. 1, 5, 6, 7, 10, 11 +[23] +B. Fulkerson, A. Vedaldi, and S. Soatto, “Class segmentation and +object localization with superpixel neighborhoods,” in Proceedings +of the IEEE International Conference on Computer Vision, 2009, pp. +670–677. 1 +[24] +J. Tighe and S. Lazebnik, “Superparsing: scalable nonparametric +image parsing with superpixels,” in Proceedings of the European +Conference on Computer Vision, 2010, pp. 352–365. 1 +[25] +S. Liu, J. Feng, C. Domokos, H. Xu, J. Huang, Z. Hu, and +S. Yan, “Fashion parsing with weak color-category labels,” IEEE +Transactions on Multimedia, vol. 16, no. 1, pp. 253–265, 2013. 1, 2, 5, +6, 7, 10, 11 +[26] +A. Krizhevsky, I. Sutskever, and G. Hinton, “Imagenet classifica- +tion with deep convolutional neural networks,” in Advances in +Neural Information Processing Systems, 2012. 1 +[27] +R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature +hierarchies for accurate object detection and semantic segmenta- +tion,” in Proceedings of the IEEE Conference on Computer Vision and +Pattern Recognition, 2014, pp. 580–587.
1 +[28] +Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, +S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture +for fast feature embedding,” in Proceedings of the 22nd ACM +international conference on Multimedia, 2014, pp. 675–678. 1 +[29] +Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. +521, no. 7553, pp. 436–444, 2015. 1 +[30] +C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, +D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with +convolutions,” in Proceedings of the IEEE Conference on Computer +Vision and Pattern Recognition, 2015, pp. 1–9. 1 +[31] +E. Shelhamer, J. Long, and T. Darrell, “Fully convolutional +networks for semantic segmentation,” IEEE Transactions on Pattern +Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640–651, 2016. +1, 5, 11 +[32] +K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning +for image recognition,” in Proceedings of the IEEE Conference on +Computer Vision and Pattern Recognition, 2016, pp. 770–778. 1 +[33] +L.-C. Chen, Y. Yang, J. Wang, W. Xu, and A. L. Yuille, “Attention to +scale: Scale-aware semantic image segmentation,” in Proceedings +of the IEEE Conference on Computer Vision and Pattern Recognition, +2016, pp. 3640–3649. 1, 4, 5, 7, 10 +[34] +X. Liang, X. Shen, D. Xiang, J. Feng, L. Lin, and S. Yan, “Semantic +object parsing with local-global long short-term memory,” in +Proceedings of the IEEE Conference on Computer Vision and Pattern +Recognition, 2016, pp. 3185–3193. 1, 4, 5, 7, 11 +[35] +L. Yang, Q. Song, Y. Wu, and M. Hu, “Attention inspiring +receptive-fields network for learning invariant representations,” +IEEE Transactions on Neural Networks and Learning Systems, vol. 30, +no. 6, pp. 1744–1755, 2018. 1 +[36] +B. Cheng, L.-C. Chen, Y. Wei, Y. Zhu, Z. Huang, J. Xiong, T. S. +Huang, W.-M. Hwu, and H. 
Shi, “Spgnet: Semantic prediction +guidance for scene parsing,” in Proceedings of the IEEE/CVF +International Conference on Computer Vision, 2019, pp. 5218–5228. 1, +4, 5, 7 +[37] +X. Liang, C. Xu, X. Shen, J. Yang, S. Liu, J. Tang, L. Lin, and +S. Yan, “Human parsing with contextualized convolutional neural +network,” in Proceedings of the IEEE International Conference on +Computer Vision, 2015, pp. 1386–1394. 1, 2, 3, 5, 7, 10, 11 +[38] +F. Xia, P. Wang, L.-C. Chen, and A. L. Yuille, “Zoom better to see +clearer: Human and object parsing with hierarchical auto-zoom +net,” in Proceedings of the European Conference on Computer Vision, +2016, pp. 648–663. 1, 5, 7 +[39] +X. Zhang, Y. Chen, B. Zhu, J. Wang, and M. Tang, “Part-aware con- +text network for human parsing,” in Proceedings of the IEEE/CVF +Conference on Computer Vision and Pattern Recognition, 2020, pp. +8971–8980. 1, 3, 5, 7, 12 +[40] +L. Yang, Q. Song, Z. Wang, Z. Liu, S. Xu, and Z. Li, “Quality-aware +network for human parsing,” arXiv preprint arXiv:2103.05997, 2021. +1 +[41] +R. Ji, D. Du, L. Zhang, L. Wen, Y. Wu, C. Zhao, F. Huang, and +S. Lyu, “Learning semantic neural tree for human parsing,” in +Proceedings of the European Conference on Computer Vision, 2020, pp. +205–221. 1, 5, 6, 7, 8, 12 +[42] +K. Gong, Y. Gao, X. Liang, X. Shen, M. Wang, and L. Lin, “Graphon- +omy: Universal human parsing via graph transfer learning,” in +Proceedings of the IEEE/CVF Conference on Computer Vision and +Pattern Recognition, 2019, pp. 7450–7459. 1, 5, 6, 7 +[43] +X. Zhang, Y. Chen, M. Tang, J. Wang, X. Zhu, and Z. Lei, “Human +parsing with part-aware relation modeling,” IEEE Transactions on +Multimedia, 2022. 1, 5, 6, 7, 12 +[44] +T. Ruan, T. Liu, Z. Huang, Y. Wei, S. Wei, and Y. Zhao, “Devil in +the details: Towards accurate single and multiple human parsing,” +in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, +pp. 4814–4821. 1, 2, 5, 6, 7, 8, 10, 12 +[45] +Z. Zhang, C. Su, L. Zheng, and X. 
Xie, “Correlating edge, pose with +parsing,” in Proceedings of the IEEE/CVF Conference on Computer +Vision and Pattern Recognition, 2020, pp. 8900–8909. 1, 5, 6, 7, 11, 12 +[46] +Y. Liu, L. Zhao, S. Zhang, and J. Yang, “Hybrid resolution network +using edge guided region mutual information loss for human +parsing,” in Proceedings of the 28th ACM International Conference on +Multimedia, 2020, pp. 1670–1678. 1, 5, 6, 7, 12 +[47] +X. Nie, J. Feng, and S. Yan, “Mutual learning to adapt for joint +human parsing and pose estimation,” in Proceedings of the European +Conference on Computer Vision, 2018, pp. 502–517. 1, 5, 6, 7, 12 +[48] +Y. Zhao, J. Li, Y. Zhang, and Y. Tian, “From pose to part: Weakly- +supervised pose evolution for human part segmentation,” IEEE +Transactions on Pattern Analysis and Machine Intelligence, 2022. 1, 3, +5, 6, 7 +[49] +S. Liu, Y. Sun, D. Zhu, G. Ren, Y. Chen, J. Feng, and J. Han, +“Cross-domain human parsing via adversarial feature and label +adaptation,” in Proceedings of the AAAI Conference On Artificial +Intelligence, 2018, pp. 7146–7153. 1, 2, 5, 7 +[50] +Y. Luo, Z. Zheng, L. Zheng, T. Guan, J. Yu, and Y. Yang, “Macro- +micro adversarial network for human parsing,” in Proceedings of +the European Conference on Computer Vision, 2018, pp. 418–434. 1, 5, +7, 12 +[51] +T. Li, Z. Liang, S. Zhao, J. Gong, and J. Shen, “Self-learning with +rectification strategy for human parsing,” in Proceedings of the +IEEE/CVF Conference on Computer Vision and Pattern Recognition, +2020, pp. 9263–9272. 1, 3, 5, 7, 12 +[52] +P. Li, Y. Xu, Y. Wei, and Y. Yang, “Self-correction for human pars- +ing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, +2020. 1, 5, 7, 8, 11, 12, 14 +[53] +M. Mameli, M. Paolanti, R. Pietrini, G. Pazzaglia, E. Frontoni, and +P. Zingaretti, “Deep learning approaches for fashion knowledge +extraction from social media: a review,” IEEE Access, 2021. 1 +[54] +W. Cheng, S. Song, C.-Y. Chen, S. C. Hidayati, and J. 
Liu, “Fashion +meets computer vision: A survey,” ACM Computing Surveys, +vol. 54, no. 4, pp. 1–41, 2021. 1 +[55] +K. Khan, R. U. Khan, K. Ahmad, F. Ali, and K.-S. Kwak, “Face +segmentation: A journey from classical to deep learning paradigm, +approaches, trends, and directions,” IEEE Access, vol. 8, pp. 58 683– +58 699, 2020. 1 +[56] +S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz, +and D. Terzopoulos, “Image segmentation using deep learning: +A survey,” IEEE Transactions on Pattern Analysis and Machine +Intelligence, 2021. 2, 5 +[57] +X. Liu, M. Zhang, W. Liu, J. Song, and T. Mei, “Braidnet: Braiding +semantics and details for accurate human parsing,” in Proceedings +of the 27th ACM International Conference on Multimedia, 2019, pp. +338–346. 2, 5, 7, 8, 12 +[58] +L. Yang, Z. Liu, T. Zhou, and Q. Song, “Part decomposition and +refinement network for human parsing,” IEEE/CAA Journal of +Automatica Sinica, 2022. 3 +[59] +T. Zhou, W. Wang, S. Liu, Y. Yang, and L. V. Gool, “Differentiable +multi-granularity human representation learning for instance- +aware human semantic parsing,” in Proceedings of the IEEE/CVF +Conference on Computer Vision and Pattern Recognition, 2021, pp. +1622–1631. 3, 7, 8, 12, 14 +[60] +Z. Liu, X. Zhu, L. Yang, X. Yan, M. Tang, Z. Lei, G. Zhu, X. Feng, +Y. Wang, and J. Wang, “Multi-initialization optimization network +for accurate 3d human pose and shape estimation,” in Proceedings +of the 29th ACM International Conference on Multimedia, 2021, pp. +1976–1984. 3 +[61] +L. Yang, Q. Song, Z. Wang, and M. Jiang, “Parsing r-cnn for +instance-level human analysis,” in Proceedings of the IEEE/CVF +Conference on Computer Vision and Pattern Recognition, 2019, pp. +364–373. 3, 7, 8, 12 +[62] +D. de Geus, P. Meletis, C. Lu, X. Wen, and G.
Dubbelman, “Part- +aware panoptic segmentation,” in Proceedings of the IEEE/CVF +Conference on Computer Vision and Pattern Recognition, 2021, pp. +5485–5494. 3 +[63] +W. Wang, T. Zhou, F. Porikli, D. Crandall, and L. V. Gool, “A +survey on deep learning technique for video segmentation,” arXiv +preprint arXiv:2107.01153, 2021. 3 +[64] +H.-S. Fang, G. Lu, X. Fang, J. Xie, Y.-W. Tai, and C. Lu, “Weakly +and semi supervised human body part parsing via pose-guided +knowledge transfer,” in Proceedings of the IEEE Conference on +Computer Vision and Pattern Recognition, 2018, pp. 70–78. 3, 5 +[65] +H. He, J. Zhang, B. Thuraisingham, and D. Tao, “Progressive +one-shot human parsing,” in Proceedings of the AAAI Conference on +Artificial Intelligence, 2021, pp. 1522–1530. 3, 5 +[66] +H. He, J. Zhang, B. Zhuang, J. Cai, and D. Tao, “End-to-end +one-shot human parsing,” arXiv preprint arXiv:2105.01241, 2021. 3 +[67] +Y. Gao, L. Liang, C. Lang, S. Feng, Y. Li, and Y. Wei, “Clicking +matters: Towards interactive human parsing,” IEEE Transactions +on Multimedia, 2022. 3 +[68] +Q. Chen, T. Ge, Y. Xu, Z. Zhang, X. Yang, and K. Gai, “Semantic +human matting,” in Proceedings of the 26th ACM International +Conference on Multimedia, 2018, pp. 618–626. 3 +[69] +J. Liu, Y. Yao, W. Hou, M. Cui, X. Xie, C. Zhang, and X.-S. Hua, +“Boosting semantic human matting with coarse annotations,” in +Proceedings of the IEEE/CVF Conference on Computer Vision and +Pattern Recognition, 2020, pp. 8563–8572. 3 +[70] +R. A. Guler and I. Kokkinos, “Holopose: Holistic 3d human recon- +struction in-the-wild,” in Proceedings of the IEEE/CVF Conference on +Computer Vision and Pattern Recognition, 2019, pp. 10 884–10 894. 3, +15 +[71] +Z. Zheng, T. Yu, Y. Wei, Q. Dai, and Y. Liu, “Deephuman: 3d +human reconstruction from a single image,” in Proceedings of the +IEEE/CVF International Conference on Computer Vision, 2019, pp. +7739–7749. 3 +[72] +H. Liang, J. Yuan, and D. 
Thalmann, “Parsing the hand in depth +images,” IEEE Transactions on Multimedia, vol. 16, no. 5, pp. 1241– +1253, 2014. 3, 15 +[73] +J. Lin, H. Yang, D. Chen, M. Zeng, F. Wen, and L. Yuan, “Face +parsing with roi tanh-warping,” in Proceedings of the IEEE/CVF +Conference on Computer Vision and Pattern Recognition, 2019, pp. +5654–5663. 3, 15 +[74] +R. A. Guler, N. Neverova, and I. Kokkinos, “Densepose: Dense +human pose estimation in the wild,” in Proceedings of the IEEE +Conference on Computer Vision and Pattern Recognition, 2018, pp. +7297–7306. 3, 10, 11 +[75] +T. Zhu, P. Karlsson, and C. Bregler, “Simpose: Effectively learning +densepose and surface normals of people from simulated data,” +in Proceedings of the European Conference on Computer Vision, 2020, +pp. 225–242. 3 +[76] +M. M. Kalayeh, E. Basaran, M. Gokmen, M. E. Kamasak, and +M. Shah, “Human semantic parsing for person re-identification,” +in Proceedings of the IEEE Conference on Computer Vision and Pattern +Recognition, 2018, pp. 1062–1071. 3, 12 +[77] +W. Yang, H. Huang, Z. Zhang, X. Chen, K. Huang, and S. Zhang, +“Towards rich feature discovery with class activation maps +augmentation for person re-identification,” in Proceedings of the +IEEE/CVF Conference on Computer Vision and Pattern Recognition, +2019, pp. 1389–1398. 3 +[78] +Y. Sun, L. Zheng, Y. Li, Y. Yang, Q. Tian, and S. Wang, “Learning +part-based convolutional features for person re-identification,” +IEEE Transactions on Pattern Analysis and Machine Intelligence, +vol. 43, no. 3, pp. 902–917, 2019. 3 +[79] +H. Huang, W. Yang, J. Lin, G. Huang, J. Xu, G. Wang, X. Chen, and +K. Huang, “Improve person re-identification with part awareness +learning,” 2, vol. 29, pp. 7468–7481, 2020. 3 +[80] +Z. Li, J. Lv, Y. Chen, and J. Yuan, “Person re-identification with part +prediction alignment,” Computer Vision and Image Understanding, +vol. 205, 2021. 3 +[81] +M. Tian, S. Yi, H. Li, S. Li, X. Zhang, J. Shi, J. Yan, and X. 
Wang, +“Eliminating background-bias for robust person re-identification,” +in Proceedings of the IEEE Conference on Computer Vision and Pattern +Recognition, 2018, pp. 5794–5803. 3 +[82] +Y. Chen, X. Zhu, and S. Gong, “Instance-guided context rendering +for cross-domain person re-identification,” in Proceedings of the +IEEE/CVF International Conference on Computer Vision, 2019, pp. +232–242. 3 +[83] +S. Yu, S. Li, D. Chen, R. Zhao, J. Yan, and Y. Qiao, “Cocas: A +large-scale clothes changing person dataset for re-identification,” +in Proceedings of the IEEE/CVF Conference on Computer Vision and +Pattern Recognition, 2020, pp. 3400–3409. 3 +[84] +X. Qian, W. Wang, L. Zhang, F. Zhu, Y. Fu, X. Tao, Y.-G. Jiang, and +X. Xue, “Long-term cloth-changing person re-identification,” in +Proceedings of the Asian Conference on Computer Vision, 2020, pp. +71–88. 3 +[85] +X. Han, Z. Wu, Z. Wu, R. Yu, and L. S. Davis, “Viton: An image- +based virtual try-on network,” in Proceedings of the IEEE Conference +on Computer Vision and Pattern Recognition, 2018, pp. 7543–7552. 4 +[86] +B. Wang, H. Zheng, X. Liang, Y. Chen, L. Lin, and M. Yang, +“Toward characteristic-preserving image-based virtual try-on +network,” in Proceedings of the European Conference on Computer +Vision, 2018, pp. 589–604. 4 +[87] +R. Yu, X. Wang, and X. Xie, “Vtnfp: An image-based virtual try- +on network with body and clothing feature preservation,” in +Proceedings of the IEEE/CVF International Conference on Computer +Vision, 2019, pp. 10 511–10 520. 4 +[88] +Z. Wu, G. Lin, Q. Tao, and J. Cai, “M2e-try on net: Fashion from +model to everyone,” in Proceedings of the 27th ACM International +Conference on Multimedia, 2019, pp. 293–301. 4 +[89] +H. Dong, X. Liang, X. Shen, B. Wang, H. Lai, J. Zhu, Z. Hu, and +J. Yin, “Towards multi-pose guided virtual try-on network,” in +Proceedings of the IEEE/CVF International Conference on Computer +Vision, 2019, pp. 9026–9035. 4 +[90] +G. Liu, D. Song, R. Tong, and M. 
Tang, “Toward realistic virtual try-on through landmark-guided shape matching,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, pp. 2118–2126. 4
[91] Z. Xie, X. Zhang, F. Zhao, H. Dong, M. Kampffmeyer, H. Yan, and X. Liang, “Was-vton: Warping architecture search for virtual try-on network,” in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 3350–3359. 4
[92] F. Zhao, Z. Xie, M. Kampffmeyer, H. Dong, S. Han, T. Zheng, T. Zhang, and X. Liang, “M3d-vton: A monocular-to-3d virtual try-on network,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 13239–13249. 4
[93] T. Issenhuth, J. Mary, and C. Calauzenes, “Do not mask what you do not need to mask: a parser-free virtual try-on,” in Proceedings of the European Conference on Computer Vision, 2020, pp. 619–635. 4
[94] Y. Chang, T. Peng, R. He, X. Hu, J. Liu, Z. Zhang, and M. Jiang, “Pf-vton: Toward high-quality parser-free virtual try-on network,” in International Conference on Multimedia Modeling, 2022, pp. 28–40. 4
[95] C. Lin, Z. Li, S. Zhou, S. Hu, J. Zhang, L. Luo, J. Zhang, L. Huang, and Y. He, “Rmgn: A regional mask guided network for parser-free virtual try-on,” arXiv preprint arXiv:2204.11258, 2022. 4
[96] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, 2014. 4, 7
[97] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4401–4410. 4
[98] M. Niemeyer and A. Geiger, “Giraffe: Representing scenes as compositional generative neural feature fields,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 11453–11464. 4
[99] A. Nichol, P.
Dhariwal, A. Ramesh, P. Shyam, P. Mishkin, B. McGrew, I. Sutskever, and M. Chen, “Glide: Towards photorealistic image generation and editing with text-guided diffusion models,” arXiv preprint arXiv:2112.10741, 2021. 4
[100] B. Wu, Z. Xie, X. Liang, Y. Xiao, H. Dong, and L. Lin, “Image comes dancing with collaborative parsing-flow video synthesis,” IEEE Transactions on Image Processing, vol. 30, pp. 9259–9269, 2021. 4
[101] A. Fruhstuck, K. K. Singh, E. Shechtman, N. J. Mitra, P. Wonka, and J. Lu, “Insetgan for full-body image generation,” arXiv preprint arXiv:2203.07293, 2022. 4
[102] R. Chen, X. Chen, B. Ni, and Y. Ge, “Simswap: An efficient framework for high fidelity face swapping,” in Proceedings of the 28th ACM International Conference on Multimedia, 2020, pp. 2003–2011. 4
[103] L. Yang, Q. Song, and Y. Wu, “Attacks on state-of-the-art face recognition using attentional adversarial attack generative network,” Multimedia Tools and Applications, vol. 80, no. 1, pp. 855–875, 2021. 4
[104] Y. Liu, W. Chen, L. Liu, and M. S. Lew, “Swapgan: A multistage generative approach for person-to-person fashion style transfer,” IEEE Transactions on Multimedia, vol. 21, no. 9, pp. 2209–2222, 2019. 4
[105] J. Huo, S. Jin, W. Li, J. Wu, Y.-K. Lai, Y. Shi, and Y. Gao, “Manifold alignment for semantically aligned style transfer,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 14861–14869. 4
[106] Z. Ma, T. Lin, X. Li, F. Li, D. He, E. Ding, N. Wang, and X. Gao, “Dual-affinity style embedding network for semantic-aligned image style transfer,” IEEE Transactions on Neural Networks and Learning Systems, 2022. 4
[107] B.-K. Kim, G. Kim, and S.-Y. Lee, “Style-controlled synthesis of clothing segments for fashion image manipulation,” IEEE Transactions on Multimedia, vol. 22, no. 2, pp. 298–310, 2019. 4
[108] E. Ntavelis, A. Romero, I.
Kastanis, L. V. Gool, and R. Timofte, “Sesame: Semantic editing of scenes by adding, manipulating or erasing objects,” in Proceedings of the European Conference on Computer Vision, 2020, pp. 394–411. 4
[109] H.-Y. Tseng, M. Fisher, J. Lu, Y. Li, V. Kim, and M.-H. Yang, “Modeling artistic workflows for image generation and editing,” in Proceedings of the European Conference on Computer Vision, 2020, pp. 158–174. 4
[110] S. Liu, X. Liang, L. Liu, X. Shen, J. Yang, C. Xu, and L. Lin, “Matching-cnn meets knn: Quasi-parametric human parsing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1419–1427. 5, 11
[111] S. Liu, X. Liang, L. Liu, K. Lu, L. Lin, X. Cao, and S. Yan, “Fashion parsing with video context,” IEEE Transactions on Multimedia, vol. 17, no. 8, pp. 1347–1358, 2015. 5, 6, 7
[112] F. Xia, J. Zhu, P. Wang, and A. L. Yuille, “Pose-guided human parsing by an and/or graph using pose-context features,” Proceedings of the AAAI Conference on Artificial Intelligence, pp. 3632–3640, 2016. 5
[113] X. Liang, X. Shen, J. Feng, L. Lin, and S. Yan, “Semantic object parsing with graph lstm,” in Proceedings of the European Conference on Computer Vision, 2016, pp. 125–143. 4, 5, 7, 11
[114] X. Liang, L. Lin, W. Yang, P. Luo, J. Huang, and S. Yan, “Clothes co-parsing via joint image segmentation and labeling with application to clothing retrieval,” IEEE Transactions on Multimedia, vol. 18, no. 6, pp. 1175–1186, 2016. 5, 6, 7, 10, 11
[115] X. Liang, L. Lin, X. Shen, J. Feng, S. Yan, and E. P. Xing, “Interpretable structure-evolving lstm,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1010–1019. 4, 5, 7, 11
[116] K. Gong, X. Liang, D. Zhang, X. Shen, and L. Lin, “Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp.
+932–940. 5, 6, 7, 10, 11, 12 +[117] F. Xia, P. Wang, X. Chen, and A. L. Yuille, “Joint multi-person pose +estimation and semantic part segmentation,” in Proceedings of the +IEEE Conference on Computer Vision and Pattern Recognition, 2017, +pp. 6769–6778. 5 +[118] B. Zhu, Y. Chen, M. Tang, and J. Wang, “Progressive cognitive +human parsing,” in Proceedings of the AAAI Conference on Artificial +Intelligence, 2018, pp. 7607–7614. 5, 6, 7 +[119] X. Luo, Z. Su, and J. Guo, “Trusted guidance pyramid network +for human parsing,” in Proceedings of the 26th ACM International +Conference on Multimedia, 2018, pp. 654–662. 5, 7, 11 +[120] Y. Zhao, J. Li, Y. Zhang, and Y. Tian, “Multi-class part parsing +with joint boundary-semantic awareness,” in Proceedings of the +IEEE/CVF International Conference on Computer Vision, 2019, pp. +9177–9186. 5, 6, 7 +[121] H. He, J. Zhang, Q. Zhang, and D. Tao, “Grapy-ml: Graph pyramid +mutual learning for cross-dataset human parsing,” in Proceedings of +the AAAI Conference on Artificial Intelligence, 2020, pp. 10 949–10 956. +4, 5, 7 +[122] Y. Yuan, X. Chen, and J. Wang, “Object-contextual representa- +tions for semantic segmentation,” in Proceedings of the European +Conference on Computer Vision, 2020, pp. 173–190. 5, 12 +[123] X. Zhang, Y. Chen, B. Zhu, J. Wang, and M. Tang, “Blended +grammar network for human parsing,” in Proceedings of the +European Conference on Computer Vision, 2020, pp. 189–205. +5, +6, 7, 12 +[124] J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, D. Liu, Y. Mu, +M. Tan, X. Wang, W. Liu, and B. Xiao, “Deep high-resolution +representation learning for visual recognition,” IEEE Transactions +on Pattern Analysis and Machine Intelligence, vol. 43, no. 10, pp. +3349–3364, 2020. 5, 12 +[125] Y. Liu, S. Zhang, J. Yang, and P. Yuen, “Hierarchical information +passing based noise-tolerant hybrid learning for semi-supervised +human parsing,” in Proceedings of the AAAI Conference on Artificial +Intelligence, 2021, pp. 
2207–2215. 5, 7, 12, 14
[126] Z. Jin, T. Gong, D. Yu, Q. Chu, J. Wang, C. Wang, and J. Shao, “Mining contextual information beyond image for semantic segmentation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7231–7241. 5, 12
[127] Z. Jin, B. Liu, Q. Chu, and N. Yu, “Isnet: Integrate image-level and semantic-level context for semantic segmentation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7189–7198. 5, 12
[128] D. Zeng, Y. Huang, Q. Bao, J. Zhang, C. Su, and W. Liu, “Neural architecture search for joint human parsing and pose estimation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 11385–11394. 5, 6, 7, 12
[129] Z. Zhang, C. Su, L. Zheng, X. Xie, and Y. Li, “On the correlation among edge, pose and parsing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 5, 6, 7, 12
[130] W. Wang, T. Zhou, S. Qi, J. Shen, and S.-C. Zhu, “Hierarchical human semantic parsing with comprehensive part-relation modeling,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 5, 6, 7
[131] K. Liu, O. Choi, J. Wang, and W. Hwang, “Cdgnet: Class distribution guided network for human parsing,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 4473–4482. 5, 7, 11, 12, 14
[132] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997. 4
[133] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2881–2890. 5
[134] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no.
4, pp. 834–848, 2017. 5
[135] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2117–2125. 5
[136] A. Kirillov, R. Girshick, K. He, and P. Dollár, “Panoptic feature pyramid networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 6399–6408. 5
[137] J. Fang, Y. Sun, Q. Zhang, Y. Li, W. Liu, and X. Wang, “Densely connected search space for more flexible neural architecture search,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10628–10637. 6
[138] Q. Li, A. Arnab, and P. H. Torr, “Holistic, instance-level human parsing,” in British Machine Vision Conference, 2017. 7, 8, 12
[139] H. Qin, W. Hong, W.-C. Hung, Y.-H. Tsai, and M.-H. Yang, “A top-down unified framework for instance-level human parsing,” in British Machine Vision Conference, 2019. 7, 8, 12
[140] L. Yang, Q. Song, Z. Wang, M. Hu, C. Liu, X. Xin, W. Jia, and S. Xu, “Renovating parsing r-cnn for accurate multiple human parsing,” in Proceedings of the European Conference on Computer Vision, 2020, pp. 421–437. 7, 8, 12, 14
[141] J. Zhao, J. Li, H. Liu, S. Yan, and J. Feng, “Fine-grained multi-human parsing,” International Journal of Computer Vision, vol. 128, no. 8, pp. 2185–2203, 2020. 7, 8, 12
[142] S. Zhang, X. Cao, G.-J. Qi, Z. Song, and J. Zhou, “Aiparsing: Anchor-free instance-level human parsing,” IEEE Transactions on Image Processing, 2022. 7, 8, 12, 14
[143] M. Kiefel and P. Gehler, “Human pose estimation with fields of parts,” in Proceedings of the European Conference on Computer Vision, 2014, pp. 331–346. 7
[144] K. He, G. Gkioxari, P. Dollár, and R.
Girshick, “Mask r-cnn,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2961–2969. 7
[145] Z. Tian, C. Shen, H. Chen, and T. He, “Fcos: A simple and strong anchor-free object detector,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 4, pp. 1922–1933, 2020. 8
[146] X. Wang, A. Jabri, and A. A. Efros, “Learning correspondence from the cycle-consistency of time,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2566–2576. 8, 13
[147] X. Li, S. Liu, S. D. Mello, X. Wang, J. Kautz, and M.-H. Yang, “Joint-task self-supervised learning for temporal correspondence,” in Advances in Neural Information Processing Systems, 2019, pp. 318–328. 8, 13
[148] A. A. Jabri, A. Owens, and A. A. Efros, “Space-time correspondence as a contrastive random walk,” in Advances in Neural Information Processing Systems, 2020, pp. 19545–19560. 8, 13
[149] N. Wang, W. Zhou, and H. Li, “Contrastive transformation for self-supervised correspondence learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, pp. 10174–10182. 8, 13
[150] S. Jeon, D. Min, S. Kim, and K. Sohn, “Mining better samples for contrastive learning of temporal correspondence,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 1034–1044. 8, 9, 13
[151] J. Xu and X. Wang, “Rethinking self-supervised correspondence learning: A video frame-level similarity perspective,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 10075–10085. 8, 9, 13
[152] Z. Zhao, Y. Jin, and P.-A. Heng, “Modelling neighbor relation in joint space-time graph for video correspondence learning,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 9960–9969. 8, 9, 13
[153] L. Li, T. Zhou, W. Wang, L. Yang, J. Li, and Y.
Yang, “Locality-aware inter- and intra-video reconstruction for self-supervised correspondence learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 8, 13
[154] J. Son, “Contrastive learning for space-time correspondence via self-cycle consistency,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 14679–14688. 8, 9, 13
[155] D. Mckee, Z. Zhan, B. Shuai, D. Modolo, J. Tighe, and S. Lazebnik, “Transfer of representations to video label propagation: implementation factors matter,” arXiv preprint arXiv:2203.05553, 2022. 8, 12, 13
[156] C. Vondrick, A. Shrivastava, A. Fathi, S. Guadarrama, and K. Murphy, “Tracking emerges by colorizing videos,” in Proceedings of the European Conference on Computer Vision, 2018, pp. 391–408. 8
[157] S. Liu, G. Zhong, S. D. Mello, J. Gu, V. Jampani, M.-H. Yang, and J. Kautz, “Switchable temporal propagation network,” in Proceedings of the European Conference on Computer Vision, 2018, pp. 87–102. 8
[158] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 9729–9738. 9, 13
[159] P. Luo, X. Wang, and X. Tang, “Pedestrian parsing via deep decompositional network,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 2648–2655. 10, 11
[160] X. Chen, R. Mottaghi, X. Liu, S. Fidler, R. Urtasun, and A. Yuille, “Detect what you can: Detecting and representing objects using holistic models and body parts,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1971–1978. 10, 11, 12
[161] J. Li, J. Zhao, Y. Wei, C. Lang, Y. Li, T. Sim, S. Yan, and J. Feng, “Multiple-human parsing in the wild,” arXiv preprint arXiv:1705.07206, 2017. 10, 11, 12
[162] J. Zhao, J. Li, Y. Cheng, T. Sim, S.
Yan, and J. Feng, “Understanding humans in crowded scenes: Deep nested adversarial learning and a new benchmark for multi-human parsing,” in Proceedings of the 26th ACM International Conference on Multimedia, 2018, pp. 792–800. 10, 11, 12
[163] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in Proceedings of the European Conference on Computer Vision, 2014, pp. 740–755. 10, 12
[164] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes (voc) challenge,” International Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, 2010. 10
[165] B. Hariharan, P. Arbelaez, R. Girshick, and J. Malik, “Simultaneous detection and segmentation,” in Proceedings of the European Conference on Computer Vision, 2014, pp. 297–312. 12
[166] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, 2017, pp. 6000–6010. 13
[167] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” in Proceedings of the International Conference on Learning Representations, 2020. 13
[168] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” in Proceedings of the European Conference on Computer Vision, 2020, pp. 213–229. 13
[169] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” in Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019, pp. 4171–4186. 13
[170] H. Bao, L.
Dong, S. Piao, and F. Wei, “Beit: Bert pre-training of image transformers,” in Proceedings of the International Conference on Learning Representations, 2022. 13
[171] K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. Girshick, “Masked autoencoders are scalable vision learners,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 13
[172] B. Cheng, I. Misra, A. G. Schwing, A. Kirillov, and R. Girdhar, “Masked-attention mask transformer for universal image segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 13
[173] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, “Deformable detr: Deformable transformers for end-to-end object detection,” in Proceedings of the International Conference on Learning Representations, 2021. 13
[174] B. Cheng, A. G. Schwing, and A. Kirillov, “Per-pixel classification is not all you need for semantic segmentation,” in Advances in Neural Information Processing Systems, 2021, pp. 17864–17875. 13
[175] B. Cheng, A. Choudhuri, I. Misra, A. Kirillov, R. Girdhar, and A. G. Schwing, “Mask2former for video instance segmentation,” arXiv preprint arXiv:2112.10764, 2021. 13
[176] I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” in Proceedings of the International Conference on Learning Representations, 2018. 13
[177] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and F.-F. Li, “Imagenet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015. 14
[178] E. Wood, T. Baltrusaitis, C. Hewitt, S. Dziadzio, M. Johnson, V. Estellers, T. J. Cashman, and J. Shotton, “Fake it till you make it: Face analysis in the wild using synthetic data alone,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 3681–3691. 14
[179] L.
Yang, H. Jiang, Q. Song, and J. Guo, “A survey on long-tailed visual recognition,” International Journal of Computer Vision, 2022. 15
[180] L. Yang, Q. Song, Z. Wang, M. Hu, and C. Liu, “Hier r-cnn: Instance-level human parts detection and a new benchmark,” IEEE Transactions on Image Processing, vol. 30, pp. 39–54, 2020. 15
[181] C. Zheng, W. Wu, T. Yang, S. Zhu, C. Chen, R. Liu, J. Shen, N. Kehtarnavaz, and M. Shah, “Deep learning-based human pose estimation: A survey,” arXiv preprint arXiv:2012.13392, 2020. 15

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

Deep Learning Technique for Human Parsing: A Survey and Outlook
Lu Yang, Wenhe Jia, Shan Li, Qing Song

Abstract—Human parsing aims to partition humans in image or video into multiple pixel-level semantic parts. In the last decade, it has gained significantly increased interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring, to social media, to visual special effects, just to name a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions are still confusing.
In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page, to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.

Index Terms—Human Parsing, Human Parsing Datasets, Deep Learning, Literature Survey
1 INTRODUCTION

HUMAN parsing [1]–[5], considered as the fundamental task of human-centric visual understanding [6], aims to classify the human parts and clothing accessories in images or videos at pixel-level. Numerous studies have been conducted on human parsing due to its crucial role in widespread application areas, e.g., security monitoring, autonomous driving, social media, electronic commerce, visual special effects, artistic creation, giving birth to various excellent human parsing solutions and applications.

As early as the beginning of this century, some studies tried to identify the level of upper body clothing [10], the grammatical representations of clothing [11] and the deformation of body contour [12] under very limited circumstances. These early studies facilitated the research on pixel-level human parts and clothing recognition, i.e., human parsing task.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Immediately afterward, some traditional machine learn- ing and computer vision techniques were utilized to solve human parsing problems, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=', structured model [1], [13], [14], clustering algorithm [15], grammar model [16], [17], conditional random field [18]–[20], template matching [21], [22] and super-pixel [23]–[25].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Afterward, the prosperity of deep learning and convolutional neural network [26]– [32] has further promoted the vigorous development of human parsing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Attention mechanism [33]–[36], scale-aware features [37]–[40], tree structure [3], [41], graph structure [4], [42], [43], edge-aware learning [44]–[46], pose-aware learning [2], [47], [48] and other technologies [49]–[52] greatly improved the performance of human parsing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' However, some existing challenges and under-investigated issues make Lu Yang, Wenhe Jia, Shan Li, Qing Song are with the Beijing University of Posts and Telecommunications, Beijing, 100876, China (e-mail: soeaver@bupt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='edu.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='cn;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' jiawh@bupt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='cn;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' ls1995@bupt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='cn;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' priv@bupt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='cn) Corresponding author: Qing Song (email: priv@bupt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='cn) (b) Multiple human parsing (MHP) (a) Single human parsing (SHP) (c) Video human parsing (VHP) human instance discrimination temporal correspondence learning parts relationship modeling Fig.' 
Fig. 1: Human parsing tasks reviewed in this survey: (a) single human parsing (SHP) [7]; (b) multiple human parsing (MHP) [8]; (c) video human parsing (VHP) [9].

… human parsing is still a task worthy of further exploration. With the rapid development of human parsing, several literature reviews have been produced. However, existing surveys are neither precise nor in-depth: some offer only a superficial introduction to human parsing from a macro fashion/social-media perspective [53], [54], while others review only a single sub-task from the micro perspective of face parsing [55]. In addition, given the fuzziness of the current taxonomy and the diversity of methods, a comprehensive and in-depth investigation is highly needed and helpful. In response, we provide the first review that systematically introduces background concepts, recent advances, and an outlook on human parsing.

arXiv:2301.00394v1 [cs.CV] 1 Jan 2023
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

Fig. 2: Outline of this survey. [Figure: a taxonomy tree covering Challenges (§2.1: Large Intra-class Variation, Unconstrained Poses, Occlusion), Taxonomy (§2.2: SHP, MHP, VHP), Relevant Tasks (§2.3: Pose Estimation, Image Segmentation, Dense Pose Estimation, Person Re-identification, Virtual Try-on, Conditional Human Image Generation), Applications (§2.4); SHP Models (§3.1: Context Learning, Structured Representation, Multi-task Learning, Other Modeling Models), MHP Models (§3.2: Bottom-up, One-stage Top-down, Two-stage Top-down), VHP Models (§3.3: Cycle-tracking, Reconstructive Learning, Contrastive Learning); SHP Datasets (§4.1: ATR, LIP, …), MHP Datasets (§4.2: PASCAL-Person-Part, CIHP, …), VHP Datasets (§4.3: VIP, …); SHP/MHP/VHP Benchmarking (§5.1–§5.3); Outlook (§6: A Transformer-based Baseline (§6.1), Under-Investigated Open Issues (§6.2), New Directions (§6.3): Efficient Inference, Synthetic Dataset, Long-tailed Phenomenon, Video Instance-level Human Parsing, Whole-body Human Parsing, Cooperation across Different Human-centric Directions).]
1.1 Scope
This survey reviews human parsing from a comprehensive perspective, covering not only single human parsing (Figure 1 (a)) but also multiple human parsing (Figure 1 (b)) and video human parsing (Figure 1 (c)). At the technical level, it focuses on deep learning-based human parsing methods and datasets of the last ten years. To provide the necessary background, it also introduces some relevant literature from non-deep-learning methods and other fields. At the practical level, the advantages and disadvantages of various methods are compared, and detailed performance comparisons are given. Beyond summarizing and analyzing existing work, we also give an outlook on future opportunities for human parsing and put forward a new transformer-based baseline to promote the sustainable development of the community. A curated list of human parsing methods and datasets, together with the proposed transformer-based baseline, can be found at https://github.com/soeaver/awesome-human-parsing.

1.2 Organization
Figure 2 shows the outline of this survey. §2 gives brief background on the problem formulation and challenges (§2.1), the human parsing taxonomy (§2.2), relevant tasks (§2.3), and applications of human parsing (§2.4). §3 provides a detailed review of representative deep learning-based human parsing studies. Frequently used datasets and performance comparisons are reviewed in §4 and §5.
An outlook on future opportunities for human parsing is presented in §6, including a new transformer-based baseline (§6.1), several under-investigated open issues (§6.2), and new directions for future study (§6.3). Conclusions are drawn in §7.

2 PRELIMINARIES

2.1 Problem Formulation and Challenges
Formally, we use x to denote the input human-centric data and y the pixel-level supervision target, with X and Y denoting the spaces of input data and supervision targets, respectively. Human parsing maps data x to target y: X ↦ Y. This problem formulation is consistent with image segmentation [56], but X is limited to the human-centric space.
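The mapping X ↦ Y amounts to per-pixel classification over the part categories. The following is a minimal sketch in NumPy, with random logits standing in for a real parser's output; the spatial size, the category count, and the function name are illustrative assumptions, not details from the survey.

```python
import numpy as np

# Hypothetical illustration of the X -> Y mapping: a parser produces
# per-pixel class scores over C semantic part categories, and the
# predicted parsing map y is the per-pixel argmax over those scores.
def parse_from_logits(logits: np.ndarray) -> np.ndarray:
    """logits: (H, W, C) array of class scores; returns an (H, W) label map."""
    return np.argmax(logits, axis=-1)

rng = np.random.default_rng(0)
H, W, C = 4, 6, 20          # e.g., C = 20 part/clothing categories (LIP-style)
logits = rng.normal(size=(H, W, C))   # stand-in for network output
y_pred = parse_from_logits(logits)

assert y_pred.shape == (H, W)                      # one label per pixel
assert y_pred.min() >= 0 and y_pred.max() < C      # labels index the C categories
```

A real parser replaces the random logits with the output of a segmentation network, but the supervision target y keeps exactly this per-pixel shape.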
Therefore, in much of the literature, human parsing is regarded as fine-grained image segmentation. The central problem of human parsing is how to model human structures. The human body presents a highly structured hierarchy, and all of its parts interact naturally; most parsers aim to construct this interaction explicitly or implicitly. However, the following challenges make the problem more complicated:

Large Intra-class Variation. In human parsing, objects with large gaps in visual appearance may share the same semantic category. For example, "upper clothes" is an abstract concept without strict visual constraints: objects of many colors, textures, and shapes belong to this category, leading to significant intra-class variation.
Further challenges may be added by illumination changes, different viewpoints, noise corruption, low image resolution, and filtering distortion. Large intra-class variation makes it harder for a classifier to learn decision boundaries, resulting in semantic inconsistency in predictions.

Unconstrained Poses. In the earlier human parsing benchmarks [1], [14], [25], [37], the data is usually collected from fashion media, in which people typically stand or adopt a limited number of simple poses. In the wild, however, human pose is unconstrained and shows great diversity, so more and more studies are turning to real-world human parsing. Unconstrained poses increase the state space of the target geometrically, which poses great challenges to human semantic representations.
Moreover, the left-right discrimination problem is widespread in human parsing (e.g., left arm vs. right arm, left leg vs. right leg) and is also severely affected by unconstrained poses [44], [49], [57].

Occlusion. Occlusion mainly presents two modes: (1) occlusion between humans and objects; (2) occlusion between humans. The former destroys the continuity of human parts or clothing, resulting in incomplete appearance information about the targets, causing local semantic loss, and easily leading to ambiguity [37], [39]. The latter is a more severe challenge: in addition to destroying continuity, it often causes foreground confusion.
In human parsing, only the occluded target human is regarded as foreground, while the others are regarded as background. However, since they have similar appearance, it is difficult to determine which parts belong to the foreground [58].

Remark. In addition to the above challenges, some scenario-specific challenges also hinder the progress of human parsing, such as the trade-off between inference efficiency and accuracy in crowded scenes, and motion blur and camera position changes in movement scenes.

2.2 Human Parsing Taxonomy
According to the characteristics of the input space X (number of humans, data modality), human parsing can be categorized into three sub-tasks (see Figure 1): single human parsing, multiple human parsing, and video human parsing.

Single Human Parsing (SHP). SHP is the cornerstone of human parsing; it assumes that there is only one foreground human instance in the image.
Therefore, y contains only the corresponding semantic category supervision at the pixel level. This simple and straightforward task definition lets most related research focus on how to model robust and generalizable relationships among human parts. In addition to being the cornerstone of human parsing, SHP is also often used as auxiliary supervision for other tasks, e.g., person re-identification, human mesh reconstruction, and virtual try-on.

Multiple Human Parsing (MHP). Multiple human parsing, also known as instance-level human parsing, aims to parse multiple human instances in a single pass. Besides category information, y also provides instance supervision at the pixel level, i.e., the person identity of each pixel. The core problems of MHP are how to discriminate between different human instances and how to comprehensively learn each human's features in crowded scenes. In addition, inference efficiency is an important concern for MHP: ideally, inference should be real-time and independent of the number of human instances. Beyond being an independent task, MHP is sometimes combined with other human visual understanding tasks in a multi-task learning manner, such as pose estimation [59], [60], dense pose estimation [61], or panoptic segmentation [62].

Video Human Parsing (VHP). VHP needs to parse every human in video data, and can be regarded as a complex visual task integrating video segmentation and image-level human parsing. Current VHP studies mainly adopt the unsupervised video object segmentation setting [63], i.e., y is unknown during training, and the ground truth of the first frame is given at inference; the temporal correspondence is approximated from x alone. Relative to SHP and MHP, VHP faces additional challenges that are inevitable in video segmentation settings, e.g., motion blur and camera position changes. Benefiting from the growing popularity of video data, VHP has a wide range of application potential, with intelligent monitoring and video editing as typical cases.

Remark. In recent years, some potential research directions have also received attention, including weakly-supervised human parsing [48], [51], [64], one-shot human parsing [65], [66], and interactive human parsing [67].
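The VHP inference setting above (first-frame ground truth given, temporal correspondence approximated from x alone) is commonly realized by matching per-pixel features against the reference frame and copying labels from the nearest match. The sketch below is a hedged, minimal version of that idea: random vectors stand in for learned embeddings, and the function name, shapes, and class count are illustrative assumptions rather than details from any surveyed method.

```python
import numpy as np

# Minimal nearest-neighbour label propagation: each pixel of the current
# frame copies the label of its closest reference-frame pixel in feature space.
def propagate_labels(ref_feat, ref_labels, cur_feat):
    """ref_feat: (N, D), ref_labels: (N,), cur_feat: (M, D) -> (M,) labels."""
    # Pairwise squared distances between current and reference features.
    d2 = ((cur_feat[:, None, :] - ref_feat[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)          # index of nearest reference pixel
    return ref_labels[nearest]

rng = np.random.default_rng(0)
N, M, D = 50, 40, 8
ref_feat = rng.normal(size=(N, D))                 # first-frame embeddings
ref_labels = rng.integers(0, 5, size=N)            # 5 hypothetical part classes
cur_feat = ref_feat[:M] + 0.01 * rng.normal(size=(M, D))  # slightly moved pixels

pred = propagate_labels(ref_feat, ref_labels, cur_feat)
assert np.array_equal(pred, ref_labels[:M])  # near-duplicates map back to themselves
```

Cycle-tracking, reconstructive, and contrastive VHP methods differ mainly in how the embeddings are learned without labels; the propagation step at inference follows this matching pattern.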
2.3 Relevant Tasks
Within computer vision research, several tasks are strongly relevant to human parsing; we briefly describe them below.

Pose Estimation. The purpose of pose estimation is to locate human parts and build body representations (such as skeletons) from input data. Human parsing and pose estimation share the same input space X, but there are differences in the supervision targets. The most crucial difference is that human parsing is a dense prediction task that needs to predict the category of each pixel, whereas pose estimation is a sparse prediction task that focuses only on the locations of a limited number of keypoints. The two tasks are also often combined in multi-task learning, or one is used as a guiding condition for the other.
For example, human parsing as a guide can help pose estimation reduce the impact of clothing on human appearance [19].

Image Segmentation. Image segmentation is a fundamental topic in image processing and computer vision, mainly comprising semantic segmentation and instance segmentation. As a basic visual task, it has many research directions that can be regarded as branches, and human parsing is one of them. In the pre-deep-learning era, image segmentation focused on the continuity of color, texture, and edges, while human parsing paid more attention to modeling body topology. In the deep learning era, the methods of the two fields show more similarities. However, more and more human parsing literature takes modeling the relationships among parts as its goal, which differs significantly from the general goal of image segmentation.
Therefore, human parsing and image segmentation are closely related but independent problems.

Remark. Ordinarily, most human-centric dense prediction tasks show positive relevance to human parsing, e.g., human matting [68], [69], human mesh reconstruction [70], [71], and face/hand parsing [72], [73].

2.4 Applications of Human Parsing
As a crucial task in computer vision, human parsing underpins a large number of applications. We introduce some common ones below.

Dense Pose Estimation. The goal of dense pose estimation is to map all human pixels in an RGB image to the 3D surface of the human body [74].
Human parsing is an important precondition that can constrain the mapping of dense points. At present, mainstream dense pose estimation methods explicitly integrate human parsing supervision, such as DensePose R-CNN [74], Parsing R-CNN [61], and SimPose [75]. Therefore, the performance of human parsing directly affects dense pose estimation results.

Person Re-identification. Person re-identification seeks to predict whether two images from different cameras belong to the same person. The apparent characteristics of the human body are an important factor affecting accuracy. Human parsing can provide pixel-level semantic information, helping re-identification models perceive the position and composition of human parts and clothing. Various studies have introduced human parsing explicitly or implicitly into re-identification methods, improving model performance in multiple aspects, e.g., local visual cues [76], [77], spatial alignment [78]–[80], background-bias elimination [81], domain adaptation [82], and clothes changing [83], [84].

[Figure: a timeline (2012–2022) of representative human parsing datasets, e.g., SYSU-Clothes (TMM2016), LIP (CVPR2017), MHP-v1.0 (ArXiv2017), COCO-DP (CVPR2018), CIHP (ECCV2018), VIP (MM2018), MHP-v2.0 (MM2018), and methods, e.g., AOG (AAAI2016), Attention (CVPR2016), LG-LSTM (CVPR2016), Graph-LSTM (ECCV2016), HAZN (ECCV2016), Struc-LSTM (CVPR2017), SSL (CVPR2017), Joint (CVPR2017), Holistic (BMVC2017), ProCNet (AAAI2018), AFLA (AAAI2018), WSHP (CVPR2018), TGPNet (MM2018), MuLA (ECCV2018), MMAN (ECCV2018), JPPNet (TPAMI2018), PGN (ECCV2018), ATEN (MM2018), CE2P (AAAI2019), Graphonomy (CVPR2019), CNIF (ICCV2019), BSANet (ICCV2019), SPGNet (ICCV2019), …]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(MM2019) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='BraidNet ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2019) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Parsing R-CNN ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(BMVC2019) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Unified ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2019) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='TimeCycle ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(NeurIPS2019) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='UVC ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2012) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Fashionista ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TMM2013) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} 
+page_content='CFPD ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ICCV2015) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Chictopia10k ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TPAMI2015) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ATR ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2014) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='PPP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ICCV2013) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='PPSS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ICCV2013) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='DailyPhotos ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2012) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Yamaguchi ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ICCV2013) ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='DMPM ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ICCV2013) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='PaperDoll ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TMM2013) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CFPD ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2014) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='HPM ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2015) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='M-CNN ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TMM2015) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='FPVC ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ICCV2015) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Co-CNN ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TPAMI2015) ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ATR ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='2020 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='2021 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPRW2021) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='HRHP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(AAAI2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Grapy-ML ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='HHP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='SLRS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CorrPM ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ECCV2020) ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='SemaTree ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ECCV2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='BGNet ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TPAMI2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='SCHP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(IJCV2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='NAN ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='PCNet ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(MM2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='DTCF ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ECCV2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='OCR ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TPAMI2020) ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='HRNet ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ECCV2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='RP R-CNN ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(NeurIPS2020) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CRW ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2022) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CDGNet ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TMM2022) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='PRM ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TPAMI2022) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='PADNet ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TIP2022) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='AIParsing ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2022) ' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='LIIR ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2022) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='SCC ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ArXiv2022) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='UVC+ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(AAAI2021) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='HIPN ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(AAAI2021) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='POPNet ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ICCV2021) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='MCIBI ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ICCV2021) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='NPPNet ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TPAMI2021) ' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='PRHP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(AAAI2021) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ContrastCorr ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ICCV2021) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='VFS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ICCV2021) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ISNet ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(TPAMI2021) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='HTCorrM ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2021) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='MGHR ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(CVPR2021) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CLTC ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='(ICCV2021) ' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='JSTG ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Single Human Parsing (SHP) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Multiple Human Parsing (MHP) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Video Human Parsing (VHP) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3: Timeline of representative human parsing works from 2012 to 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' The upper part represents the datasets of human parsing (§4), and the lower part represents the models of human parsing (§3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Virtual Try-on.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Virtual try-on is a burgeoning and interest- ing application in the vision and graphic communities [85]– [92].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Most of the research follows the three processes: human parsing, appearance generation, and refinement.' 
Therefore, human parsing is a necessary step to obtain clothing masks, appearance constraints, and pose maintenance. Recently, some works have begun to study parser-free virtual try-on [93]–[95]. Through teacher-student learning, parsing-based pre-training, and other techniques, virtual try-on can be realized without the human parsing map during inference. However, most of these works still introduce parsing results during training, and their generation quality retains a gap from parser-based methods.

Conditional Human Image Generation. Image generation/synthesis as a field has seen a lot of progress in recent years [96]–[99]. Non-existent but high-fidelity images can be created in large quantities. Among them, human image generation has attracted attention because of its rich downstream applications. Compared with unconditional generation, conditional generation can produce corresponding output as needed, and the human parsing map is one of the most widely used pre-conditions. There have been many excellent works on parsing-based conditional human image generation, e.g., CPFNet [100] and InsetGAN [101].

Remark. Besides the above cases, most human-centric generation applications can be built with the help of human parsing, e.g., deepfakes [102], [103], style transfer [104]–[106], clothing editing [107]–[109].
3 DEEP LEARNING BASED HUMAN PARSING

Existing human parsing work can be categorized into three sub-tasks: single human parsing, multiple human parsing, and video human parsing, focusing on part relationship modeling, human instance discrimination, and temporal correspondence learning, respectively. According to this taxonomy, we sort out the representative works (lower part of Figure 3) and review them in detail below.

3.1 Single Human Parsing (SHP) Models

SHP considers extracting human features through part relationship modeling. According to the modeling strategy, SHP models can be divided into three main classes: context learning, structured representation, and multi-task learning. Moreover, some special but interesting methods are reviewed as "other modeling models". Table 1 summarizes the characteristics of the reviewed SHP models.

3.1.1 Context Learning

Context learning, a mainstream paradigm for single human parsing, seeks to learn the connection between local and global features in order to model the relationships between human parts. Recent studies have developed various context learning methods for single human parsing, including attention mechanisms and scale-aware features.

Attention Mechanism. The first initiative was proposed in [33], which applies an attention mechanism to part relationship modeling. Specifically, soft weights, learned by the attention mechanism, are used to weight features at different scales and merge them. At almost the same time, LG-LSTM [34], Graph-LSTM [113], and Struc-LSTM [115] exploit complex local and global context information through Long Short-Term Memory (LSTM) [132] and achieve very competitive results.
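As a rough illustration of this idea (a minimal sketch, not the implementation of [33]), attention-to-scale fusion can be written as: each scale contributes a feature map and a per-pixel attention logit; a softmax across scales turns the logits into soft weights, and the fused feature is the per-pixel weighted sum. The function name and plain-list representation below are assumptions for illustration.

```python
import math

def fuse_scales(features, scores):
    """Soft-attention fusion of multi-scale features (illustrative sketch).

    features: list over scales of 2D grids [H][W] of feature values
              (assumed already resized to a common resolution).
    scores:   list over scales of 2D grids [H][W] of attention logits.
    Returns one [H][W] grid: per-pixel softmax over the scale axis,
    then a weighted sum of the scale features.
    """
    n_scales = len(features)
    h, w = len(features[0]), len(features[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Numerically stable softmax over the scale dimension.
            logits = [scores[s][i][j] for s in range(n_scales)]
            m = max(logits)
            exps = [math.exp(l - m) for l in logits]
            z = sum(exps)
            weights = [e / z for e in exps]
            # Soft weights merge the multi-scale features.
            fused[i][j] = sum(weights[s] * features[s][i][j]
                              for s in range(n_scales))
    return fused

# Two scales on a 1x2 grid: equal logits reduce to a plain average.
print(fuse_scales([[[1.0, 2.0]], [[3.0, 4.0]]],
                  [[[0.0, 0.0]], [[0.0, 0.0]]]))  # [[2.0, 3.0]]
```

With equal logits the softmax weights are uniform, so fusion degenerates to averaging; learned logits let the network emphasize the scale most informative at each pixel.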
Then, [36] proposes a Semantic Prediction Guidance (SPG) module that learns to re-weight local features under the guidance of pixel-wise semantic prediction. With the rise of graph models, researchers realized that the attention mechanism is able to establish correlations between graph model nodes. For example, [121] introduces Graph

TABLE 1: Summary of essential characteristics for reviewed SHP models (§3.1). The training datasets and whether each model is open source are also listed. See §4 for more detailed descriptions of datasets. These notes also apply to the other tables.

[Table 1 columns: Year, Method, Pub., Context (Attention, Scale-aware), Structured (Tree, Graph, Edge), Multi-task (Pose, Denoising, Adversarial), Others, Datasets, Open Source. Entries include Yamaguchi [1] (CVPR 2012); DMPM [14], PaperDoll [20], CFPD [25] (2013); HPM [17] (2014); M-CNN [110], Co-CNN [37], FPVC [111], ATR [22] (2015); AOG [112], Attention [33], LG-LSTM [34], Graph-LSTM [113], HAZN [38], SYSU-Clothes [114] (2016); Struc-LSTM [115], SSL [116], Joint [117] (2017); ProCNet [118] (2018), …]
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='AAAI ✓ PPP AFLA [49] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='AAAI ✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='LIP WSHP [64] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CVPR ✓ PPP TGPNet [119] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='MM ✓ ATR ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='MuLA [47] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ECCV ✓ LIP/PPP MMAN [50] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ECCV ✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='LIP/PPP/PPSS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='JPPNet [2] TPAMI ✓ LIP/PPP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='2019 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CE2P [44] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='AAAI ✓ ✓ LIP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Graphonomy [42] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CVPR ✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ATR/PPP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CNIF [3] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ICCV ✓ ATR/LIP/PPP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='BSANet [120] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ICCV ✓ ✓ PPP SPGNet [36] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ICCV ' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ PPP BraidNet [57] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='MM ✓ LIP 2020 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='Grapy-ML [121] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='AAAI ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ATR/PPP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='HHP [4] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CVPR ✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ATR/LIP/PPP/PPSS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='SLRS [51] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CVPR ✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ 
✓ ATR/LIP PCNet [39] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CVPR ✓ ✓ LIP/PPP CorrPM [45] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CVPR ✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ATR/LIP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='DTCF [46] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='MM ✓ ✓ LIP/PPP SemaTree [41] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ECCV ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ✓ LIP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='OCR [122] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ECCV ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ LIP ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='BGNet [123] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ECCV ✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ LIP/PPP/PPSS HRNet [124] TPAMI ✓ LIP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='SCHP [52] TPAMI ✓ ✓ ATR/LIP/PPP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='2021 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='HIPN [125] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='AAAI ✓ LIP/PPP POPNet [65] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='AAAI ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ATR-OS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} 
+page_content='MCIBI [126] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ICCV ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ LIP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ISNet [127] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ICCV ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ LIP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='NPPNet [128] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='ICCV ✓ ✓ LIP/PPP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='HTCorrM [129] TPAMI ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ATR/LIP PRHP [130] TPAMI ✓ ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ATR/LIP/PPP/PPSS ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='2022 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CDGNet [131] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CVPR ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ✓ ATR/LIP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='HSSN [5] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='CVPR ✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ LIP/PPP ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='✓ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='PRM [43] ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='TMM ✓ LIP/PPP PADNet [48] TPAMI ✓ PPP Pyramid Mutual Learning (Grapy-ML) to address the cross- ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='dataset human parsing problem,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' in which the self-attention is used to model the correlations between context nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Although attention mechanisms have achieved great results in previous work, global context dependency cannot be fully understood due to the lack of explicit prior supervision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' CDGNet [131] adopts the human parsing labels accumulated in the horizontal and vertical directions as the supervisions, aiming to learn the position distribution of human parts, and weighting them to the global features through attention mechanism to achieve accurate parts relationship modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Scale-aware Features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' The most intuitive context learning method is to directly use scale-aware features (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' multi-scale features [133], [134], features pyramid networks [135], [136]), which has been widely verified in semantic segmentation [56].' 
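As a minimal, illustrative sketch of this scale-aware idea (not the implementation of any model above; `avg_pool`, `upsample`, and `multi_scale_context` are hypothetical helpers): pool a feature map to several resolutions, upsample each result back, and fuse by channel concatenation.

```python
import numpy as np

def avg_pool(x, k):
    """Average-pool an (H, W, C) feature map by factor k (H, W divisible by k)."""
    h, w, c = x.shape
    return x.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def upsample(x, k):
    """Nearest-neighbor upsampling by factor k."""
    return x.repeat(k, axis=0).repeat(k, axis=1)

def multi_scale_context(x, scales=(1, 2, 4)):
    """Pyramid-style context: pool at several scales, upsample back,
    and concatenate along channels."""
    feats = [upsample(avg_pool(x, s), s) for s in scales]
    return np.concatenate(feats, axis=-1)

x = np.random.rand(8, 8, 16)   # toy 8x8 feature map with 16 channels
ctx = multi_scale_context(x)   # shape (8, 8, 48): 16 channels per scale
```

Real context modules insert learned convolutions between the pooling and fusion steps; those transforms are omitted here for brevity.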
The earliest effort can be traced back to Co-CNN [37]. It integrates cross-layer context, global image-level context, super-pixel context, and cross-super-pixel neighborhood context into a unified architecture, overcoming the obstacle that the low-resolution features of FCN [31] pose for modeling part relationships. Subsequently, [38] proposes the Hierarchical Auto-Zoom Net (HAZN), which adaptively zooms predicted image regions to their proper scales to refine the parsing. TGPNet [119] regards the label fragmentation and complex annotation in human parsing datasets as a non-negligible obstacle to accurate part-relationship modeling, and tries to alleviate this limitation by supervising multi-scale context information. PCNet [39] further studies adaptive contextual features and captures representative global context by mining the associated semantics of human parts through its proposed part class module, relational aggregation module, and relational dispersion module.

3.1.2 Structured Representation

The purpose of structured representation is to learn the inherent composition or decomposition of human parts, so as to model their relationships. Research efforts in this field are mainly made along two directions: using a tree structure to represent the hierarchical relationship between the body and its parts, and using a graph structure to represent the connectivity between different parts. These two ideas are complementary, so recent work has often adopted them simultaneously.

Tree Structure. DMPM [14] and HPM [17] solve single human parsing with the parselets representation: they construct a group of parsable segments by low-level over-segmentation algorithms, represent these segments as leaf nodes, and then search for the best graph configuration to obtain semantic parsing results. Similarly, [22] formulates human parsing as an Active Template Regression (ATR) problem, where each human part is represented as a linear combination of learned mask templates and morphed to a more precise mask with active shape parameters.
The parsing results are then generated from the mask template coefficients and the active shape parameters. In the same line of work, ProCNet [118] treats human parsing as a progressive recognition task, modeling structured part relationships by locating the whole body first and then gradually segmenting hierarchical components. CNIF [3] further extends the human tree structure, representing the body as a hierarchy of multi-level semantic parts and treating human parsing as a multi-source information fusion process. A more efficient solution is developed in [41], which uses a tree structure to encode human physiological composition and then designs a coarse-to-fine, cascaded process to generate accurate parsing results.

Graph Structure. Graph structure is an excellent relationship modeling tool, and some researchers have introduced it into human parsing networks for part-relation reasoning.
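As a generic illustration of such part-relation reasoning (a sketch of one graph-convolution step, not the formulation of any particular paper; the toy part graph and all names are invented for the example), per-part feature vectors attached to graph nodes are updated by aggregating neighbor features through a normalized adjacency matrix:

```python
import numpy as np

# Toy part graph: 0=body, 1=head, 2=torso, 3=arm (edges = physical connectivity).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]
n_parts, dim = 4, 8

A = np.eye(n_parts)                   # self-loops keep each node's own feature
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A = A / A.sum(axis=1, keepdims=True)  # row-normalize so aggregation averages

rng = np.random.default_rng(0)
H = rng.normal(size=(n_parts, dim))   # per-part feature vectors
W = rng.normal(size=(dim, dim))       # projection (learned in practice, random here)

H_next = np.maximum(A @ H @ W, 0.0)   # one message-passing step with ReLU
```

Stacking several such steps lets information about one part (e.g., the torso) influence the representation of its neighbors, which is the basic mechanism the graph-based parsers below build on.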
A clothing co-parsing system is designed in [114], which takes segmented regions as vertices and incorporates several contexts of clothing configuration to build a multi-image graphical model. To address the cross-dataset human parsing problem, Graphonomy [42] proposes a universal human parsing agent that introduces hierarchical graph transfer learning to encode the underlying label semantics and propagate relevant semantic information. BGNet [123] aims to improve parsing accuracy in similar or cluttered scenes through graph structure: it exploits the inherent hierarchical structure of the human body and the relationships between parts, employing grammar rules in both cascaded and parallel manners to correct the segmentation of easily confused parts. A landmark work along this line was proposed by Wang et al. [4], [130].
They construct a hierarchical human parser (HHP) that represents the hierarchical human structure by three kinds of part relations: decomposition, composition, and dependency. Besides, HHP reasons over the human structure effectively through a message-passing, feed-back inference scheme. Following this idea, [43] proposes Part-aware Relation Modeling (PRM), which generates features with context adapted to the various sizes and shapes of human parts.

3.1.3 Multi-task Learning

Auxiliary supervision, such as part edges or human pose, can help the parser better understand the relationships between parts. Therefore, multi-task learning has become an essential paradigm for single human parsing.

Edge-aware Learning.
Edge information is implicit in human parsing datasets, so edge-aware supervision or features can be introduced into a human parser without additional labeling cost. In particular, edge-aware learning enhances the model's ability to discriminate adjacent parts and improves the fineness of part boundaries. The typical work is [44], which proposes the Context Embedding with Edge Perceiving (CE2P) framework, using an edge-perceiving module to integrate object-contour characteristics and refine the part boundaries. Because of its excellent performance and scalability, CE2P has become the baseline for many subsequent works. CorrPM [45] and HTCorrM [129] are built on CE2P and further use part edges to help model part relationships: they construct a heterogeneous non-local module that mixes edge, pose, and semantic features into a hybrid representation, and explore the spatial affinity between this hybrid representation and the parsing feature map at all positions.
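The claim that edge supervision comes at no extra labeling cost can be made concrete: a binary part-boundary map is recoverable from the parsing annotation itself, wherever adjacent pixels carry different part labels. A minimal sketch (the helper name is illustrative):

```python
import numpy as np

def edges_from_labels(labels):
    """Derive a binary part-boundary map from an (H, W) integer label map:
    a pixel is a boundary pixel if its label differs from a 4-neighbor's."""
    edge = np.zeros_like(labels, dtype=bool)
    dh = labels[1:, :] != labels[:-1, :]   # vertical label changes
    dw = labels[:, 1:] != labels[:, :-1]   # horizontal label changes
    edge[1:, :] |= dh                      # mark both sides of each change
    edge[:-1, :] |= dh
    edge[:, 1:] |= dw
    edge[:, :-1] |= dw
    return edge

labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1                          # two parts split down the middle
boundary = edges_from_labels(labels)       # True only in the two middle columns
```

Such a derived map can then supervise an auxiliary edge branch alongside the parsing branch.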
BSANet [120] argues that edge information helps eliminate part-level ambiguities, and proposes a joint parsing framework with boundary and semantic awareness to address this issue. Specifically, a boundary-aware module makes intermediate-level features focus on part boundaries for accurate localization; these are then fused with high-level features for efficient part recognition. To further enrich the edge-aware features, a dual-task cascaded framework (DTCF) is developed in [46], which implicitly integrates parsing and edge features to progressively refine the parsing results.

Pose-aware Learning. Both human parsing and pose estimation seek to predict dense, structured human representations, and there is a strong intrinsic relationship between them. Therefore, some studies use pose-aware learning to assist part-relationship modeling.
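Such joint training typically minimizes a weighted sum of the parsing loss and the auxiliary edge and pose losses. A schematic sketch, with illustrative loss forms and weights (not taken from any of the cited papers):

```python
import numpy as np

def cross_entropy(probs, target):
    """Mean cross-entropy; probs is (N, K) class probabilities, target is (N,) labels."""
    return -np.mean(np.log(probs[np.arange(len(target)), target] + 1e-12))

def multi_task_loss(parse_probs, parse_gt, edge_probs, edge_gt,
                    pose_pred, pose_gt, w_edge=0.5, w_pose=0.5):
    """Joint objective: parsing CE + weighted edge CE + weighted pose L2.
    The weights are illustrative hyper-parameters."""
    l_parse = cross_entropy(parse_probs, parse_gt)
    l_edge = cross_entropy(edge_probs, edge_gt)
    l_pose = np.mean((pose_pred - pose_gt) ** 2)   # keypoint regression term
    return l_parse + w_edge * l_edge + w_pose * l_pose

# Perfect predictions on toy data drive the joint loss to (near) zero.
probs = np.eye(3)[np.array([0, 1, 2])]
loss = multi_task_loss(probs, np.array([0, 1, 2]),
                       np.eye(2)[np.array([0, 1])], np.array([0, 1]),
                       np.zeros((5, 2)), np.zeros((5, 2)))
```

The auxiliary terms act as regularizers: gradients from the edge and pose branches shape the shared backbone features that the parsing head consumes.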
As early as 2012, Yamaguchi et al. [1], [20] exploited the relationship between clothing and the underlying body pose, exploring techniques to accurately parse people wearing clothing into their constituent garment pieces. Almost immediately, Liu et al. [25] combined a human pose estimation module with an MRF-based color/category inference module and a super-pixel category classifier module to parse fashion items in images. Subsequently, Liu et al. [111] extended this idea to semi-supervised human parsing, collecting a large number of unlabeled videos, using cross-frame context for human pose co-estimation, and then performing joint human parsing on videos. SSL [116] and JPPNet [2] choose to impose human pose structures on parsing results without resorting to extra supervision, and adopt a multi-task learning manner to explore efficient human parts relationship modeling.
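The multi-task recipe shared by SSL and JPPNet — a parsing loss plus an auxiliary pose term computed from the same backbone — can be caricatured as a weighted sum of two objectives. A toy NumPy sketch (the weight `lam` and the plain L2 pose term are illustrative assumptions; the actual models use structure-aware losses):

```python
import numpy as np

def softmax_ce(logits, labels):
    """Per-pixel cross-entropy for parsing logits of shape (C, H, W)
    against integer labels of shape (H, W)."""
    z = logits - logits.max(axis=0, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=0, keepdims=True))
    h, w = labels.shape
    picked = log_probs[labels, np.arange(h)[:, None], np.arange(w)]
    return -picked.mean()

def joint_loss(parsing_logits, parsing_gt, pose_pred, pose_gt, lam=0.5):
    """Multi-task objective sketch: parsing cross-entropy plus an L2
    keypoint-heatmap regression term, in the spirit of joint
    parsing/pose models."""
    parse_term = softmax_ce(parsing_logits, parsing_gt)
    pose_term = np.mean((pose_pred - pose_gt) ** 2)
    return parse_term + lam * pose_term
```

Sharing the backbone forces the features to encode both part semantics and body structure, which is the intuition these works exploit.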
A similar work is developed in [47], which presents a Mutual Learning to Adapt model (MuLA) for joint human parsing and pose estimation. MuLA can quickly adjust the parsing and pose models to provide more robust and accurate results by incorporating information from the corresponding models. Different from the above works, Zeng et al. [128] focus on how to automatically design a unified model that performs the two tasks simultaneously so that they benefit each other. Inspired by NAS [137], they propose to search for an efficient network architecture (NPPNet), searching the encoder and decoder architectures respectively, and embedding NAS units in both multi-scale feature interaction and high-level feature fusion. To get rid of annotating pixel-wise human part masks, a weakly-supervised human parsing approach is proposed by PADNet [48].
They develop an iterative training framework to transform pose knowledge into part priors, so that only pose annotations are required during training, greatly alleviating the annotation burden.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

TABLE 2: Highlights of parts relationship modeling methods for SHP models (§3.1). Representative works of each method are also given.

Method | Representative Works | Highlights
Attention | [33], [34], [36], [113], [115], [121], [131] | Helps locate human parts of interest and suppress useless background information.
Scale-aware | [37], [38], [39], [119] | Fuses low-level texture and high-level semantic features, helping to parse small human parts.
Tree | [3], [14], [17], [22], [41], [118] | Simulates the composition and decomposition relationships between human parts and the body.
Graph | [4], [42], [43], [114], [123], [130] | Models the correlations and differences between human parts.
Edge | [44], [45], [46], [120], [129] | Solves the pixel confusion problem at the boundaries of adjacent parts, generating finer boundaries.
Pose | [1], [2], [20], [25], [47], [48], [111], [116], [128] | Serves as context clues to improve semantic consistency between parsing results and body structure.
Denoising | [51], [52], [125] | Alleviates the impact of super-pixel or annotation errors, improving robustness.
Adversarial | [49], [50] | Reduces the domain differences between training and testing data, improving generalization.

3.1.4 Other Modeling Models
Other works attempt to employ techniques outside of the above taxonomy, such as denoising and adversarial learning, which also make specific contributions to human parts relationship modeling and deserve a separate look.
Denoising. To reduce labeling costs, a large amount of noise exists in the mainstream SHP datasets [22], [116], so denoising learning for accurate human parts relationship modeling has also received some attention. SCHP [52] is the most representative work. It starts by using inaccurate parsing labels as the initialization and designs a cyclical learning scheduler to infer more reliable pseudo labels. In the same period, Li et al. [51] attempted to combine denoising learning and semi-supervised learning, proposing a Self-Learning with Rectification (SLR) strategy for human parsing. SLR generates pseudo labels for unlabeled data to retrain the parsing model and introduces a trainable graph reasoning method to correct typical errors in the pseudo labels. Based on SLR, HIPN [125] further explores combining denoising learning with semi-supervised learning; it develops noise-tolerant hybrid learning, taking advantage of positive and negative learning to better handle noisy pseudo labels.
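The core of this self-correction idea — smoothing noisy labels by aggregating predictions across self-training cycles — can be sketched as a simple average over per-cycle probability maps (a toy NumPy caricature; SCHP itself cyclically averages both model weights and labels):

```python
import numpy as np

def refine_pseudo_labels(cycle_probs):
    """Self-correction sketch: average the class-probability maps predicted
    in successive self-training cycles, then re-derive hard pseudo labels
    from the smoothed distribution. Spurious labels that only a minority
    of cycles agree on are voted down by the aggregate."""
    avg = np.mean(cycle_probs, axis=0)   # (C, H, W) averaged distribution
    return avg.argmax(axis=0)            # (H, W) refined pseudo labels
```

With two clean cycles and one noisy one, the averaged distribution recovers the majority label at every pixel, which is the intuition behind treating earlier predictions as increasingly reliable supervision.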
Adversarial Learning. Earlier, inspired by Generative Adversarial Nets (GAN) [96], a few works used adversarial learning to solve problems in parts relationship modeling. For example, to solve the domain adaptation problem, AFLA [49] proposes a cross-domain human parsing network, introducing a discriminative feature adversarial network and a structured label adversarial network to eliminate cross-domain differences in visual appearance and environmental conditions. MMAN [50] aims to solve the problem of low-level local and high-level semantic inconsistency in the pixel-wise classification loss. It contains two discriminators: Macro D, acting on the low-resolution label map and penalizing semantic inconsistency; and Micro D, focusing on the high-resolution label map and restraining local inconsistency.

Remark. In fact, many single human parsing models use a variety of parts relationship modeling methods.
Therefore, our above taxonomy only introduces the core methods of each model. Table 2 summarizes the highlights of each parts relationship modeling method.

TABLE 3: Summary of essential characteristics for reviewed MHP models (§3.2). "BU" indicates bottom-up; "1S-TD" indicates one-stage top-down; "2S-TD" indicates two-stage top-down.

Year | Method | Pub. | Pipeline | Datasets | Open Source
2017 | Holistic [138] | BMVC | 1S-TD | PPP |
2018 | PGN [8] | ECCV | BU | PPP/CIHP | ✓
2019 | CE2P [44] | AAAI | 2S-TD | CIHP/MHP-v2.0 | ✓
2019 | Parsing R-CNN [61] | CVPR | 1S-TD | CIHP/MHP-v2.0 | ✓
2019 | BraidNet [57] | MM | 2S-TD | CIHP |
2019 | Unified [139] | BMVC | 1S-TD | PPP/CIHP |
2020 | RP R-CNN [140] | ECCV | 1S-TD | CIHP/MHP-v2.0 | ✓
2020 | SemaTree [41] | ECCV | 2S-TD | CIHP/MHP-v2.0 | ✓
2020 | NAN [141] | IJCV | BU | MHP-v1.0/MHP-v2.0 | ✓
2020 | SCHP [52] | TPAMI | 2S-TD | CIHP/MHP-v2.0/VIP | ✓
2021 | MGHR [59] | CVPR | BU | PPP/MHP-v2.0/COCO-DP | ✓
2022 | AIParsing [142] | TIP | 1S-TD | CIHP/MHP-v2.0/VIP |

3.2 Multiple Human Parsing (MHP) Models
MHP seeks to locate and parse each human in the image plane. The task setting is similar to instance segmentation, so it is also called instance-level human parsing.
We divide MHP into three paradigms, according to the pipeline for discriminating human instances: bottom-up, one-stage top-down, and two-stage top-down. The essential characteristics of the reviewed MHP models are illustrated in Table 3.

Bottom-up. The bottom-up paradigm regards multiple human parsing as a fine-grained semantic segmentation task, which predicts the category of each pixel and groups pixels into the corresponding human instances. In a seminal work [8], Gong et al. propose a detection-free Part Grouping Network (PGN) that reformulates multiple human parsing as two twinned sub-tasks (semantic part segmentation and instance-aware edge detection) that can be jointly learned and mutually refined via a unified network. Among them, the instance-aware edge detection task can group semantic parts into distinct human instances. Then, NAN [141] proposes a deep Nested Adversarial Network for multiple human parsing.
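The spirit of PGN's grouping step — using predicted instance-aware edges to split semantic parts into separate humans — can be sketched as connected components over foreground pixels, with edge pixels acting as barriers (a simplified stand-in for the actual grouping algorithm; function and variable names are hypothetical):

```python
from collections import deque

import numpy as np

def group_instances(part_mask, edge_mask):
    """Bottom-up grouping sketch: run BFS connected components over
    foreground part pixels, treating instance-aware edge pixels as
    barriers, so parts separated by a predicted instance edge fall into
    different human instances."""
    h, w = part_mask.shape
    inst = np.zeros((h, w), dtype=int)
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if part_mask[sy, sx] and not edge_mask[sy, sx] and inst[sy, sx] == 0:
                next_id += 1                      # start a new instance
                inst[sy, sx] = next_id
                q = deque([(sy, sx)])
                while q:                          # flood-fill within barriers
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and part_mask[ny, nx]
                                and not edge_mask[ny, nx] and inst[ny, nx] == 0):
                            inst[ny, nx] = next_id
                            q.append((ny, nx))
    return inst
```

Two foreground regions separated by a predicted edge thus receive different instance ids, without any human detector.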
NAN consists of three GAN-like sub-nets, performing semantic saliency prediction, instance-agnostic parsing, and instance-aware clustering, respectively. Recently, Zhou et al. [59] propose a new bottom-up regime, called Multi-Granularity Human Representation (MGHR) learning, to learn category-level multiple human parsing as well as pose estimation in a joint and end-to-end manner. MGHR exploits structural information over different human granularities, transforming the difficult pixel grouping problem into an easier multi-human joint assembling task to reduce the difficulty of human instance discrimination.

One-stage Top-down. One-stage top-down is the mainstream paradigm of multiple human parsing. It first locates each human instance in the image plane, then segments each human part, in an end-to-end manner.
An early attempt is Holistic [138], which consists of a human detection network and a part semantic segmentation network, then passes the results of both networks to an instance CRF [143] to perform multiple human parsing. Inspired by Mask R-CNN [144], Qin et al. [139] propose a top-down unified framework that simultaneously performs human detection and single human parsing, identifying instances and parsing human parts in crowded scenes. A milestone one-stage top-down multiple human parsing model is proposed by Yang et al., who enhance Mask R-CNN in all aspects and propose the Parsing R-CNN [61] network, greatly improving the accuracy of multiple human parsing in a concise manner. Subsequently, Yang et al. propose an improved version of Parsing R-CNN, called RP R-CNN [140], which introduces a global semantic enhanced feature pyramid network and a parsing re-scoring network into the high-performance pipeline, achieving better performance.

TABLE 4: Highlights of human instance discrimination methods for MHP models (§3.2). Representative works of each method are also given.

Method | Representative Works | Highlights
Bottom-up | [8], [59], [141] | Good model efficiency and good accuracy on pixel-wise segmentation, but poor accuracy on instance discrimination.
One-stage Top-down | [61], [138], [139], [140], [142] | Better trade-off between model efficiency and accuracy, but pixel-wise segmentation, especially at part boundaries, is not fine enough.
Two-stage Top-down | [41], [44], [52], [57] | Good accuracy but poor efficiency; model inference time is proportional to the number of human instances.
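Both top-down variants in Table 4 share a detect-then-parse skeleton; the two-stage version simply runs the parser as a separate network on each cropped instance. A minimal sketch with stub `detector`/`parser` callables standing in for trained networks (all names here are hypothetical, NumPy only):

```python
import numpy as np

def top_down_parse(image, detector, parser):
    """Top-down sketch: detect human boxes, parse each cropped instance,
    and paste per-instance part labels back into a full-image map.
    Inference cost grows with the number of detected humans."""
    full = np.zeros(image.shape[:2], dtype=int)
    for (y0, x0, y1, x1) in detector(image):   # one box per human instance
        crop = image[y0:y1, x0:x1]
        full[y0:y1, x0:x1] = parser(crop)      # per-pixel part labels
    return full

# Stubs standing in for trained networks.
fake_detector = lambda img: [(0, 0, 2, 2), (2, 2, 4, 4)]
fake_parser = lambda crop: np.ones(crop.shape[:2], dtype=int)
out = top_down_parse(np.zeros((4, 4, 3)), fake_detector, fake_parser)
```

The per-instance loop makes the efficiency trade-off in Table 4 concrete: runtime scales linearly with the number of detected humans.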
Later, AIParsing [142] introduces the anchor-free detector [145] into the one-stage top-down paradigm for discriminating human instances, avoiding the hyper-parameter sensitivity caused by anchors.

Two-stage Top-down. The one-stage and two-stage top-down paradigms are basically the same in operation flow. The difference between them is whether the detector is trained together with the segmentation sub-network in an end-to-end manner. All two-stage top-down multiple human parsing methods consist of a human detector and a single human parser. The earliest attempt is CE2P [44], which builds a framework called M-CE2P on CE2P and Mask R-CNN, cropping the detected human instances, sending them to the single human parser, and finally combining the parsing results of all instances into a multiple human parsing prediction. Subsequent works, e.g.,
BraidNet [57], SemaTree [41], and SCHP [52] basically inherit this pipeline.

Remark. The advantage of the bottom-up and one-stage top-down paradigms is efficiency, and the advantage of the two-stage top-down paradigm is accuracy. However, as a non-end-to-end pipeline, the inference speed of two-stage top-down methods is positively correlated with the number of human instances, which limits their practical application value. The detailed highlights of the three human instance discrimination methods are summarized in Table 4.

3.3 Video Human Parsing (VHP) Models
Existing VHP studies mainly focus on propagating the first frame to the entire video through an affinity matrix, which represents the temporal correspondences learnt from raw video data. Considering the unsupervised learning paradigms, we can group them into three classes: cycle-tracking, reconstructive learning, and contrastive learning.
We summarize the essential characteristics of the reviewed VHP models in Table 5.

Cycle-tracking. Early VHP methods model the unsupervised learning target mainly via the cycle-consistency of video frames, i.e., pixels/patches are expected to fall into the same locations after a cycle of forward-backward tracking. ATEN [9] first leverages convolutional gated recurrent units to encode temporal feature-level changes; the optical flow of non-key frames is warped with the temporal memory to generate their features. TimeCycle [146] tracks the reference patch backward and forward in the video. The reference patch and the tracked patch at the end of the tracking cycle are considered to be consistent both in spatial coordinates and in feature representation. Meanwhile, UVC [147] performs region-level tracking and pixel-level correspondence with a shared affinity matrix; the tracked patch feature and the region-corresponding sub-affinity matrix are used to reconstruct the reference patch. The roles of the target and reference patches are then switched to regularize the affinity matrix to be orthogonal, which satisfies the cycle-consistency constraint.

TABLE 5: Summary of essential characteristics for reviewed VHP models (§3.3). "Cycle." indicates cycle-tracking; "Recons." indicates reconstructive learning; "Contra." indicates contrastive learning. All models are tested on the VIP dataset.

Year | Method | Pub. | Cycle. | Recons. | Contra. | Open Source
2018 | ATEN [9] | MM | ✓ | | | ✓
2019 | TimeCycle [146] | CVPR | ✓ | | | ✓
2019 | UVC [147] | NeurIPS | ✓ | ✓ | | ✓
2020 | CRW [148] | NeurIPS | ✓ | | | ✓
2021 | ContrastCorr [149] | AAAI | | ✓ | ✓ | ✓
2021 | CLTC [150] | CVPR | | | ✓ |
2021 | VFS [151] | ICCV | | | ✓ | ✓
2021 | JSTG [152] | ICCV | | | ✓ | ✓
2022 | LIIR [153] | CVPR | | ✓ | | ✓
2022 | SCC [154] | CVPR | | | ✓ | ✓
2022 | UVC+ [155] | ArXiv | ✓ | ✓ | | ✓

TABLE 6: Highlights of temporal correspondence learning methods for VHP models (§3.3). Representative works of each method are also given.

Method | Representative Works | Highlights
Cycle-tracking | [146], [147], [148], [155] | Captures temporal variations, but may produce wrong correspondences when occlusion occurs.
Reconstructive Learning | [149], [153] | Models fine-grained temporal correspondence and guides the focus onto part details.
Contrastive Learning | [150], [151], [152], [154] | Searches for discriminative features to segment similar or position-transformed human instances.
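The forward-backward consistency objective shared by these cycle-tracking methods can be sketched with soft affinity matrices: attend from frame A to frame B and back, and penalize round trips that do not return to the starting pixel (a minimal NumPy caricature with a hypothetical temperature `temp`, not any paper's exact loss):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cycle_consistency_loss(feat_a, feat_b, temp=0.07):
    """Cycle-tracking sketch: soft-attend from frame A to frame B and
    back. The round-trip matrix (A -> B -> A) should be close to the
    identity, so the loss is the negative log of each pixel's
    round-trip probability of landing on itself."""
    fwd = softmax(feat_a @ feat_b.T / temp, axis=1)   # A -> B affinity
    bwd = softmax(feat_b @ feat_a.T / temp, axis=1)   # B -> A affinity
    round_trip = fwd @ bwd                            # should approximate I
    n = feat_a.shape[0]
    return -np.log(round_trip[np.arange(n), np.arange(n)] + 1e-8).mean()
```

When features are distinctive and temporally stable, the round-trip matrix is near the identity and the loss vanishes; blurry, non-discriminative features are penalized.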
Its later version, UVC+ [155], combines features learned by image-based tasks with their video-based counterparts to further boost performance. Lately, CRW [148] represents a video as a graph, where nodes are patches and edges are affinities between nodes in adjacent frames. A cross-entropy loss guides a graph walk to track the initial node bi-directionally in feature space, and the node reached after a bunch of cycle paths is considered the target node. However, the cycle-consistency in [146], [148] strictly assumes that the target patch remains visible in consecutive frames. Once it is occluded or disappears, the correspondences will be incorrectly assigned, thus leaving an optimal transport problem between video frames.

Reconstructive Learning. As video contents shift smoothly in time, pixels in a "query" frame can be considered as copies from a set of pixels in other reference frames [156], [157].
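This copy-from-reference view can be sketched directly: build an affinity between query and reference pixel features, then reconstruct each query pixel as an affinity-weighted copy of reference values (a minimal NumPy sketch; names and the temperature are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reconstruct_query(ref_feats, query_feats, ref_values, temp=0.07):
    """Reconstructive-learning sketch: compute an affinity matrix between
    query and reference pixel features, then reconstruct each query pixel
    as an affinity-weighted combination of reference pixel values.
    Training minimizes the reconstruction error; at test time the same
    affinity propagates the first frame's parsing labels instead of
    colors."""
    affinity = softmax(query_feats @ ref_feats.T / temp, axis=1)  # (Nq, Nr)
    return affinity @ ref_values                                  # (Nq, D)
```

The reconstruction target is cheap (the frame itself), yet minimizing the error forces the affinity — and hence the features — to encode fine-grained temporal correspondence.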
Following UVC [147] in establishing pixel-level correspondence, several methods [149], [153] are proposed to learn temporal correspondence entirely by reconstructing correlated frames. Subsequently, ContrastCorr [149] not only learns from intra-video self-supervision, but also goes a step further to introduce inter-video transformation as negative correspondence. The inter-video distinction forces the feature extractor to learn discriminations between videos while preserving the fine-grained matching characteristic among intra-video frame pairs. Based on the intra-inter video correlation, LIIR [153] introduces a locality-aware reconstruction framework, which encodes position information and involves spatial compactness into intra-video correspondence learning, for locality-aware and efficient visual tracking.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

[Figure 4 omitted: a chord diagram relating SHP methods (Attention, Scale-aware, Tree, Graph, Edge, Pose, Denos., Adver.), MHP methods (BU, 1S-TD, 2S-TD), and VHP methods (Cycle., Recons., Contra.).]

Fig. 4: Correlations of different SHP, MHP and VHP methods (§3.4). We use the connections between the arc edges to summarize the correlations between human parsing methods; each connecting line stands for a study that uses both methods. The longer the arc, the more methods of this kind; the same holds for the width of the connecting lines. This correlation summary reveals the prevalence of various human parsing methods.
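Under the copy-from-reference view, reconstruction is typically an attention-weighted copy over a softmax-normalized affinity matrix. A minimal sketch under that common simplification (the temperature value, shapes, and function names are assumptions, not the exact ContrastCorr/LIIR formulation):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reconstruct(query_feat, ref_feat, ref_values, temperature=0.07):
    """Rebuild per-pixel values of a query frame as affinity-weighted
    copies of reference-frame values.
    query_feat: (Nq, D), ref_feat: (Nr, D), ref_values: (Nr, C)."""
    affinity = query_feat @ ref_feat.T / temperature  # (Nq, Nr) logits
    weights = softmax(affinity, axis=1)               # each row sums to 1
    return weights @ ref_values                       # (Nq, C) copies

# Sanity check: identical query and reference features mean each pixel
# essentially copies its own reference value back.
rng = np.random.default_rng(1)
feat = rng.normal(size=(6, 16))
values = rng.normal(size=(6, 3))
recon = reconstruct(feat, feat, values)
print(np.abs(recon - values).mean())  # close to 0
```

Training then minimizes the distance between the reconstruction and the true query values, which forces the affinity matrix to encode temporal correspondence.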
Contrastive Learning. Following the idea of pulling positive pairs close together and pushing negative pairs away from each other, a considerable number of VHP algorithms adopt contrastive learning as a training objective. To solve the optimal transport problem, CLTC [150] proposes to mine positive and semi-hard negative correspondences via consistency estimation and dynamic hardness discrimination, respectively. The well-defined positive and negative pixel pairs keep contrastive learning free of the side effects of non-consistent positives and too-hard/too-easy negatives. Unlike most methods, which perform patch-level contrastive learning, VFS [151] learns visual correspondences at the frame level. Following the data augmentation of image-level contrastive learning [158] and a well-designed temporal sampling strategy, VFS encourages convolutional features to find correspondences between similar objects and parts.
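The pull-together/push-apart objective these methods share is commonly instantiated as an InfoNCE-style loss. A minimal single-anchor numpy version, illustrative only and not the exact CLTC or VFS loss:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: a cross-entropy that treats the
    positive as the correct class among positive + negatives."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    # cross-entropy with the positive at index 0
    return float(np.log(np.exp(logits).sum()) - logits[0])

rng = np.random.default_rng(2)
anchor = rng.normal(size=32)
close = anchor + 0.05 * rng.normal(size=32)     # aligned positive pair
far = [rng.normal(size=32) for _ in range(5)]   # unrelated negatives
loss_good = info_nce(anchor, close, far)                 # correct pairing
loss_bad = info_nce(anchor, far[0], [close] + far[1:])   # swapped pairing
print(loss_good < loss_bad)  # True: aligned positives give a lower loss
```

Minimizing this loss pulls the anchor toward its positive and away from the negatives, which is exactly why mining good positives and semi-hard negatives, as CLTC does, matters so much in practice.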
Lately, [152], [154] extend the video graph with spatial relations of neighbor nodes, which determine the aggregation strength from intra-frame neighbors. The proposed space-time graph draws more attention to the association of center-neighbor pairs, thus explicitly helping to learn correspondence between part instances. SCC [154] mixes sequential Bayesian filters to formulate the optimal paths that track nodes from one frame to others, to alleviate the correspondence missing caused by random occlusion.

Remark. Within our investigation scope, current VHP research essentially follows an unsupervised, semi-automatic video object segmentation setup. But considering the potential demand, it is more desirable to fully utilize the annotations and solve the VHP problem in an instance-discriminative manner, i.e., as a fine-grained video instance segmentation task. The highlights of temporal correspondence learning methods for VHP are shown in Table 6.

[Figure 5 omitted: a citation graph over the reviewed SHP, MHP, and VHP studies (e.g., AOG, Attention, ATR, Co-CNN, JPPNet, CE2P, Graphonomy, CNIF, PGN, Parsing R-CNN, ATEN, TimeCycle, UVC, CRW, ContrastCorr, CLTC, VFS, JSTG, LIIR, SCC, UVC+).]

Fig. 5: Correlations of different SHP, MHP and VHP studies (§3.4). We list all the involved human parsing studies as dots and use connecting lines to represent their citing relations. The citing relation here refers to citations that appear in experimental comparisons, to avoid low-correlation citations in background introductions. As each line represents a citation between two studies, the larger the dot, the more times the study has been cited. These correlations highlight the relatively prominent studies.

3.4 Summary

Through the detailed review, we have subdivided SHP, MHP, and VHP studies into multiple methods and discussed their characteristics. To further investigate the development picture of the human parsing community, we summarize the correlations of the methods in Figure 4 and the correlations of the involved studies in Figure 5, respectively.
Figure 4 presents correlations between research methods, i.e., two methods are connected if a study uses both as its technical components, so the length of an arc represents the number of studies using that method. The distribution of connecting lines first shows clearly that Graph (Structure), Attention (Mechanism), and Edge(-aware Learning) of SHP are more correlated with multiple other methods, which indicates their compatibility with others and their prevalence in the community. It is worth noting that, although Tree (Structure) has many correlations with others, a large proportion of them are with the Graph method. This phenomenon indicates that the Tree method is much less generalizable than the Graph, Attention, and Edge methods. Regrettably, the negligible relations between VHP and other methods show that current VHP studies have not yet gone deep into part relationship modeling or human instance discrimination.
The correlations of human parsing studies are presented in the form of citing relations in Figure 5; each line represents a citation between two studies. For reliable statistics, we only consider citations that appear in experimental comparisons for all studies. From the citing relations, we can easily observe that Attention [33], JPPNet [2], CE2P [44], CNIF [3], and PGN [8] have the largest dots, i.e., they are experimentally compared by most other studies, which indicates that they are recognized by the community as baseline studies of great prominence. Additionally, since CE2P proposed handling the MHP sub-task with the 2S-TD pipeline and made a milestone, many SHP studies have started to compare their algorithms with MHP studies; this trend breaks down the barriers between the two sub-tasks of human parsing.
Lastly, similar to the method correlations, VHP studies form citations strictly along their order of publication among themselves, which once again shows that VHP studies have not focused on human-centric data. Synthesizing the detailed review and correlation analysis, we can draw some conclusions about the historical evolution of human parsing models. First, the research focus has gradually shifted from SHP to MHP and VHP. As more challenging tasks, the latter two also have greater application potential. With the emergence of high-quality annotated datasets and the improvement of computing power, they have received increasing attention. Secondly, the technical diversity is insufficient, and the achievements of representation learning in recent years have not fully benefited the human parsing field. Finally, the number of open-source works has increased significantly, but is still insufficient.
It is hoped that subsequent researchers will open-source their code and models as much as possible to benefit follow-up research.

4 HUMAN PARSING DATASETS

In the past decades, a variety of visual datasets have been released for human parsing (upper part of Figure 3). We summarize the classical and commonly used datasets in Table 7 and give a detailed review from multiple angles.

4.1 Single Human Parsing (SHP) Datasets

Fashionista (FS) [1] consists of 685 photographs collected from Chictopia.com, a social networking website for fashion bloggers. There are 456 training images and 299 testing images annotated with 56-class semantic labels; text tags of garment items and styling are also provided. Fashionista was once the main single human/clothing parsing dataset but was limited by its scale.
It is rarely used now.

Colorful Fashion Parsing Data (CFPD) [25] is also collected from Chictopia.com and provides 23-class noisy semantic labels and 13-class color labels. The annotated images are usually grouped into 1,341/1,341 for train/test.

DailyPhotos (DP) [14] contains 2,500 high-resolution images, crawled following the same strategy as the Fashionista dataset and thoroughly annotated with 19 categories.

PPSS [159] includes 3,673 annotated samples collected from 171 videos of different surveillance scenes and provides pixel-wise annotations for hair, face, upper-/lower-clothes, arm, and leg. It presents diverse real-world challenges, e.g., pose variations, illumination changes, and occlusions. There are 1,781 and 1,892 images for training and testing, respectively.

ATR [22] combines data from three small benchmark datasets: Fashionista [1] with 685 images, CFPD [25] with 2,682 images, and DailyPhotos [14] with 2,500 images. The labels of the Fashionista and CFPD datasets are merged into 18 categories. To enlarge the diversity, another 1,833 challenging images are collected and annotated to construct the Human Parsing in the Wild (HPW) dataset. The final combined dataset contains 7,700 images: 6,000 for training, 1,000 for testing, and 700 for validation.

Chictopia10k [37] contains 10,000 real-world human pictures from Chictopia.com, annotated with pixel-wise labels following [22]. The dataset mainly contains images in the wild (e.g., more challenging poses, occlusion, and clothes).

SYSU-Clothes [114] consists of 2,098 high-resolution fashion photos (about 800×500 on average) from a shopping website. In this dataset, six categories of clothing attributes (e.g., clothing category, clothing color, clothing length, clothing shape, collar shape, and sleeve length) and 124 attribute types across all categories are collected.

Look into Person (LIP) [116] is the most popular single human parsing dataset, annotated pixel-wise with 19 semantic human part labels and one background label. LIP contains 50,462 annotated images, grouped into 30,462/10,000/10,000 for train/val/test.
The images in the LIP dataset are person instances cropped from the COCO [163] training and validation sets.

Remark. ATR and LIP are the mainstream benchmarks among these single human parsing datasets. In recent years, the research purpose has changed from "clothing" to "human", and the data scale and annotation quality have also been significantly improved.

4.2 Multiple Human Parsing (MHP) Datasets

PASCAL-Person-Part (PPP) [160] is annotated from PASCAL-VOC-2010 [164]; it contains 3,533 multi-person images with challenging poses, split into 1,716 training images and 1,817 test images. Each image is pixel-wise annotated with 7 classes, namely head, torso, upper/lower arms, upper/lower legs, and a background category.

MHP-v1.0 [161] contains 4,980 multi-person images with fine-grained pixel-level annotations. For each person, it defines 7 body parts, 11 clothing/accessory categories, and one background label. The train/val/test sets contain 3,000/1,000/980 images, respectively.

MHP-v2.0 [162] is an extended version of MHP-v1.0 [161], providing more images and richer categories. MHP-v2.0 contains 25,403 images and has great diversity in image resolution (from 85×100 to 4,511×6,919) and human instance number (from 2 to 26 persons). These images are split into 15,403/5,000/5,000 for train/val/test with 59 categories.
COCO-DensePose (COCO-DP) [74] aims at establishing the mapping between all human pixels of an RGB image and the 3D surface of the human body. It has 27,659 images (26,151/1,508 for train/test splits) gathered from COCO [163]. The dataset provides 15 pixel-wise human parts with dense keypoint annotations.

Crowd Instance-level Human Parsing (CIHP) [8] is the largest multiple human parsing dataset to date. With 38,280 diverse real-world images, the persons are labelled with pixel-wise annotations on 20 categories. It consists of 28,280 training and 5,000 validation images with publicly available annotations, as well as 5,000 test images with annotations withheld for benchmarking purposes. All images of the CIHP dataset contain two or more instances, with an average of 3.4.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

TABLE 7: Statistics of existing human parsing datasets.
See §4.1-§4.3 for more detailed descriptions. The 14 datasets are divided into 3 groups according to the human parsing taxonomy. "Instance" indicates that instance-level human labels are provided; "Temporal" indicates that video-level labels are provided; "Super-pixel" indicates that super-pixels are used for labeling.

| Dataset | Year | Pub. | #Images | #Train/Val/Test | #Class | Purpose | Instance | Temporal | Super-pixel | Other Annotations |
| Fashionista [1] | 2012 | CVPR | 685 | 456/-/299 | 56 | Clothing | | | ✓ | Clothing-tag |
| CFPD [25] | 2013 | TMM | 2,682 | 1,341/-/1,341 | 23 | Clothing | | | ✓ | Color-seg. |
| DailyPhotos [14] | 2013 | ICCV | 2,500 | 2,500/-/- | 19 | Clothing | | | ✓ | Clothing-tag |
| PPSS [159] | 2013 | ICCV | 3,673 | 1,781/-/1,892 | 6 | Human | | | | |
| ATR [22] | 2015 | TPAMI | 7,700 | 6,000/700/1,000 | 18 | Human | | | | |
| Chictopia10k [37] | 2015 | ICCV | 10,000 | 10,000/-/- | 18 | Clothing | | | | Clothing-tag |
| SYSU-Clothes [114] | 2016 | TMM | 2,682 | 2,682/-/- | 57 | Clothing | | | ✓ | Clothing-tag |
| LIP [116] | 2017 | CVPR | 50,462 | 30,462/10,000/10,000 | 20 | Human | | | ✓ | |
| HRHP [7] | 2021 | CVPRW | 7,500 | 6,000/500/1,000 | 20 | Human | | | | |
| PASCAL-Person-Part [160] | 2014 | CVPR | 3,533 | 1,716/-/1,817 | 7 | Human | ✓ | | | Human-box |
| MHP-v1.0 [161] | 2017 | ArXiv | 4,980 | 3,000/1,000/980 | 19 | Human | ✓ | | | Human-box |
| MHP-v2.0 [162] | 2018 | MM | 25,403 | 15,403/5,000/5,000 | 59 | Human | ✓ | | | Human-box |
| COCO-DensePose [74] | 2018 | CVPR | 27,659 | 26,151/-/1,508 | 15 | Human | ✓ | | | Human-box/keypoints/densepoints |
| CIHP [8] | 2018 | ECCV | 38,280 | 28,280/5,000/5,000 | 20 | Human | ✓ | | | Human-box |
| VIP [9] | 2018 | MM | 21,246 | 18,468/-/2,778 | 20 | Human | ✓ | ✓ | | Human-box/identity |

Remark. So far, several multiple human parsing datasets have high-quality annotations and considerable data scale. In addition to pixel-wise parsing annotations, many datasets provide other rich annotations, such as boxes, keypoints/landmarks, and style. PPP, CIHP, and MHP-v2.0 are widely studied datasets, and most classical multiple human parsing methods have been verified on them.

4.3 Video Human Parsing (VHP) Datasets

Video Instance-level Parsing (VIP) [9] is the first video human parsing dataset. VIP contains 404 multi-person full-HD sequences, collected from YouTube with great diversity.
For every 25 consecutive frames in each sequence, one frame is densely annotated with 20 classes and identities. All the sequences are grouped into 354/50 for train/test, containing 18,468/2,778 annotated frames, respectively.

Remark. Since video human parsing has only attracted attention in recent years, there are few publicly available datasets, and the community still needs to invest continuously in their data scale and richness.

4.4 Summary

From Table 7, we can observe that human parsing datasets show several development trends. First, the scale of datasets continues to increase, from hundreds of images in the early years [1] to tens of thousands now [8], [116]. Second, the quality of annotation is constantly improving. Some early datasets used super-pixels [1], [114], [116] to reduce the annotation cost, while in recent years pixel-wise accurate annotation has been adopted. Finally, the annotation dimensions are becoming increasingly diverse; e.g., COCO-DensePose [74] provides box, keypoint, and UV annotations in addition to parsing.

5 PERFORMANCE COMPARISONS

To provide a more intuitive comparison, we tabulate the performance of several previously discussed models. It should be noted that the experimental settings of the studies are not entirely consistent (e.g., backbone, input size, training epochs). Therefore, we suggest taking these comparisons only as references; a more specific analysis requires studying the original articles in depth.
TABLE 8: Quantitative SHP results on ATR test (§5.1) in terms of pixel accuracy (pixAcc), foreground pixel accuracy (FGAcc), and F-1 score (F-1). The three best scores are marked in red, blue, and green, respectively.

| Year | Method | Pub. | Backbone | #Input Size | #Epoch | pixAcc | FGAcc | F-1 |
| 2012 | Yamaguchi [1] | CVPR | | | | 84.38 | 55.59 | 41.80 |
| 2013 | Paperdoll [20] | ICCV | | | | 88.96 | 62.18 | 44.76 |
| 2015 | M-CNN [110] | CVPR | | | 50 | 89.57 | 73.98 | 62.81 |
| 2015 | Co-CNN [37] | ICCV | | 150×100 | 90 | 95.23 | 80.90 | 76.95 |
| 2015 | ATR [22] | TPAMI | | 227×227 | 120 | 91.11 | 71.04 | 64.38 |
| 2016 | LG-LSTM [34] | CVPR | VGG16 | 321×321 | 60 | 96.18 | 84.79 | 80.97 |
| 2016 | Graph-LSTM [113] | ECCV | VGG16 | 321×321 | 60 | 97.60 | 91.42 | 83.76 |
| 2017 | Struc-LSTM [115] | CVPR | VGG16 | 321×321 | 60 | 97.71 | 91.76 | 87.88 |
| 2018 | TGPNet [119] | MM | VGG16 | 321×321 | 35 | 96.45 | 87.91 | 81.76 |
| 2019 | CNIF [3] | ICCV | ResNet101 | 473×473 | 150 | 96.26 | 87.91 | 85.51 |
| 2020 | CorrPM [45] | CVPR | ResNet101 | 384×384 | 150 | 97.12 | 90.40 | 86.12 |
| 2020 | HHP [4] | CVPR | ResNet101 | 473×473 | 150 | 96.84 | 89.23 | 87.25 |
| 2020 | SCHP [52] | TPAMI | ResNet101 | 473×473 | 150 | 96.25 | 87.97 | 85.55 |
| 2022 | CDGNet [131] | CVPR | ResNet101 | 512×512 | 250 | 97.39 | 90.19 | 87.16 |

5.1 SHP Performance Benchmarking

We select ATR [22] and LIP [116] as the benchmarks for single human parsing performance comparison, comparing 14 and 26 models, respectively.

5.1.1 Evaluation Metrics

The evaluation metrics of single human parsing are basically consistent with semantic segmentation [31], including pixel accuracy, mean pixel accuracy, and mean IoU. In addition, foreground pixel accuracy and F-1 score are also commonly used metrics on the ATR dataset. Pixel accuracy (pixAcc) is the simplest and most intuitive metric, expressing the proportion of correctly predicted pixels among all pixels. Foreground pixel accuracy (FGAcc) calculates pixel accuracy only over the foreground human parts. Mean pixel accuracy (meanAcc) is a simple improvement of pixel accuracy, which calculates the proportion of correctly predicted pixels in each category and averages over categories. Mean IoU (mIoU), short for mean intersection over union, calculates the ratio of the intersection to the union of two sets, where the two sets are the ground-truth and predicted regions of each category.
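As a concrete reference, these segmentation-style metrics can all be derived from a per-class confusion matrix. The sketch below is an illustrative implementation (not code from any surveyed paper); it assumes flat integer label arrays and takes class 0 to be the background when computing FGAcc:

```python
import numpy as np

def parsing_metrics(gt, pred, num_classes):
    """Compute pixAcc, FGAcc, meanAcc, and mIoU from flat label arrays.

    Assumption: labels are integers in [0, num_classes) and class 0 is
    the background (used only for FGAcc).
    """
    gt, pred = np.asarray(gt).ravel(), np.asarray(pred).ravel()
    # Confusion matrix: rows = ground truth class, cols = predicted class.
    cm = np.bincount(gt * num_classes + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    correct = np.diag(cm).astype(float)          # correctly predicted per class
    gt_total = cm.sum(axis=1).astype(float)      # ground-truth pixels per class
    pred_total = cm.sum(axis=0).astype(float)    # predicted pixels per class

    pix_acc = correct.sum() / cm.sum()
    fg = gt != 0                                  # foreground pixels only
    fg_acc = float((gt[fg] == pred[fg]).mean())
    present = gt_total > 0                        # ignore classes absent from GT
    mean_acc = (correct[present] / gt_total[present]).mean()
    union = gt_total + pred_total - correct       # |GT ∪ Pred| per class
    miou = (correct[present] / union[present]).mean()
    return pix_acc, fg_acc, mean_acc, miou
```

For example, `parsing_metrics([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0], 3)` yields a pixAcc of 4/6 and an mIoU of 0.5, since the per-class IoUs are 1/3, 2/3, and 1/2.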
The F-1 score (F-1), the harmonic mean of precision and recall, is also a common evaluation metric.

5.1.2 Results
Table 8 presents the performance of the reviewed SHP methods on the ATR test set. Struc-LSTM [115] achieves the best performance, scoring 91.71% pixAcc and 87.88% F-1 score, which greatly surpasses the other methods. Table 9 shows the results on the LIP benchmark since 2017. Overall, HIPN [125] and HSSN [5] achieve remarkable results across the metrics, with HIPN scoring 89.14% pixAcc and HSSN scoring 60.37% mIoU.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

TABLE 9: Quantitative SHP results on LIP val (§5.1) in terms of pixel accuracy (pixAcc), mean pixel accuracy (meanAcc) and mean IoU (mIoU). The three best scores are marked in red, blue, and green, respectively.

| Year | Method | Pub. | Backbone | #Input Size | #Epoch | pixAcc | meanAcc | mIoU |
| 2017 | SSL [116] | CVPR | VGG16 | 321×321 | 50 | – | – | 46.19 |
| 2018 | HSP-PRI [76] | CVPR | InceptionV3 | – | – | 85.07 | 60.54 | 48.16 |
| | MMAN [50] | ECCV | ResNet101 | 256×256 | 30 | 85.24 | 57.60 | 46.93 |
| | MuLA [47] | ECCV | Hourglass | 256×256 | 250 | 88.50 | 60.50 | 49.30 |
| | JPPNet [2] | TPAMI | ResNet101 | 384×384 | 60 | 86.39 | 62.32 | 51.37 |
| 2019 | CE2P [44] | AAAI | ResNet101 | 473×473 | 150 | 87.37 | 63.20 | 53.10 |
| | CNIF [3] | ICCV | ResNet101 | 473×473 | 150 | 88.03 | 68.80 | 57.74 |
| | BraidNet [57] | MM | ResNet101 | 384×384 | 150 | 87.60 | 66.09 | 54.42 |
| 2020 | CorrPM [45] | CVPR | ResNet101 | 384×384 | 150 | – | – | 55.33 |
| | SLRS [51] | CVPR | ResNet101 | 384×384 | 150 | 88.33 | 66.53 | 56.34 |
| | PCNet [39] | CVPR | ResNet101 | 473×473 | 120 | – | – | 57.03 |
| | HHP [4] | CVPR | ResNet101 | 473×473 | 150 | 89.05 | 70.58 | 59.25 |
| | DTCF [46] | MM | ResNet101 | 473×473 | 200 | 88.61 | 68.89 | 57.82 |
| | SemaTree [41] | ECCV | ResNet101 | 384×384 | 200 | 88.05 | 66.42 | 54.73 |
| | OCR [122] | ECCV | HRNetW48 | 473×473 | ∼100 | – | – | 56.65 |
| | BGNet [123] | ECCV | ResNet101 | 473×473 | 120 | – | – | 56.82 |
| | HRNet [124] | TPAMI | HRNetW48 | 473×473 | ∼150 | 88.21 | 67.43 | 55.90 |
| | SCHP [52] | TPAMI | ResNet101 | 473×473 | 150 | – | – | 59.36 |
| 2021 | HIPN [125] | AAAI | ResNet101 | 473×473 | 150 | 89.14 | 71.09 | 59.61 |
| | MCIBI [126] | ICCV | ResNet101 | 473×473 | 150 | – | – | 55.42 |
| | ISNet [127] | ICCV | ResNet101 | 473×473 | 160 | – | – | 56.96 |
| | NPPNet [128] | ICCV | NAS | 384×384 | 120 | – | – | 58.56 |
| | HTCorrM [129] | TPAMI | HRNetW48 | 384×384 | 180 | – | – | 56.85 |
| 2022 | CDGNet [131] | CVPR | ResNet101 | 473×473 | 150 | 88.86 | 71.49 | 60.30 |
| | HSSN [5] | CVPR | ResNet101 | 480×480 | ∼84 | – | – | 60.37 |
| | PRM [43] | TMM | ResNet101 | 473×473 | 120 | – | – | 58.86 |

TABLE 10: Quantitative MHP results on PASCAL-Person-Part test (§5.2) in terms of mIoU, APr vol and APr 50. We only mark the best score in red.

| Year | Method | Pub. | Pipeline | Backbone | #Epoch | mIoU | APr vol | APr 50 |
| 2017 | Holistic [138] | BMVC | 1S-TD | ResNet101 | 100 | 66.34 | 38.40 | 40.60 |
| 2018 | PGN [8] | ECCV | BU | ResNet101 | ∼80 | 68.40 | 39.20 | 39.60 |
| 2019 | Parsing R-CNN [61] | CVPR | 1S-TD | ResNet50 | 75 | 62.70 | 40.40 | 43.70 |
| | Unified [139] | BMVC | 1S-TD | ResNet101 | ∼600 | – | 43.10 | 48.10 |
| 2020 | RP R-CNN [140] | ECCV | 1S-TD | ResNet50 | 75 | 63.30 | 40.90 | 44.10 |
| | NAN [141] | IJCV | BU | – | 80 | – | 52.20 | 59.70 |
| 2021 | MGHR [59] | CVPR | BU | ResNet101 | 150 | – | 55.90 | 59.00 |
5.2 MHP Performance Benchmarking
We select 7 models evaluated on PASCAL-Person-Part [160], 9 on CIHP [8], and 8 on MHP-v2 [162] to compare multiple human parsing performance.

5.2.1 Evaluation Metrics
Generally speaking, multiple human parsing uses mIoU to measure semantic segmentation performance, and APr vol/APr 50 or APp vol/APp 50 to measure instance discrimination. Average precision based on region (APr vol/APr 50) [165] is similar to the AP metrics in object detection [163]: if the IoU between a predicted part and a ground-truth part is higher than a certain threshold, the prediction is considered correct, and the mean average precision is calculated. APr vol is the mean of the AP scores for overlap thresholds varying from 0.1 to 0.9 in increments of 0.1, and APr 50 is the AP score at threshold 0.5. Average precision based on part (APp vol/APp 50) [141], [161] is adopted to evaluate instance-level human parsing performance. APp is very similar to APr in its calculation, except that IoU is computed over the whole human body.

TABLE 11: Quantitative MHP results on CIHP val (§5.2) in terms of mIoU, APr vol and APr 50. We only mark the best score in red.

| Year | Method | Pub. | Pipeline | Backbone | #Epoch | mIoU | APr vol | APr 50 |
| 2018 | PGN [8] | ECCV | BU | ResNet101 | ∼80 | 55.80 | 33.60 | 35.80 |
| 2019 | CE2P [44] | AAAI | 2S-TD | ResNet101 | 150 | 59.50 | 42.80 | 48.70 |
| | Parsing R-CNN [61] | CVPR | 1S-TD | ResNet50 | 75 | 56.30 | 36.50 | 40.90 |
| | BraidNet [57] | MM | 2S-TD | ResNet101 | 150 | 60.62 | 43.59 | 48.99 |
| | Unified [139] | BMVC | 1S-TD | ResNet101 | ∼36 | 53.50 | 37.00 | 41.80 |
| 2020 | RP R-CNN [140] | ECCV | 1S-TD | ResNet50 | 150 | 60.20 | 42.30 | 48.20 |
| | SemaTree [41] | ECCV | 2S-TD | ResNet101 | 200 | 60.87 | 43.96 | 49.27 |
| | SCHP [52] | TPAMI | 2S-TD | ResNet101 | 150 | 67.47 | 52.74 | 58.94 |
| 2022 | AIParsing [142] | TIP | 1S-TD | ResNet101 | 75 | 60.70 | – | – |
TABLE 12: Quantitative MHP results on MHP-v2 val (§5.2) in terms of mIoU, APp vol and APp 50. We only mark the best score in red.

| Year | Method | Pub. | Pipeline | Backbone | #Epoch | mIoU | APp vol | APp 50 |
| 2019 | CE2P [44] | AAAI | 2S-TD | ResNet101 | 150 | 41.11 | 42.70 | 34.47 |
| | Parsing R-CNN [61] | CVPR | 1S-TD | ResNet50 | 75 | 36.20 | 38.50 | 24.50 |
| 2020 | RP R-CNN [140] | ECCV | 1S-TD | ResNet50 | 150 | 38.60 | 46.80 | 45.30 |
| | SemaTree [41] | ECCV | 2S-TD | ResNet101 | 200 | – | 42.51 | 34.36 |
| | NAN [141] | IJCV | BU | – | 80 | – | 41.78 | 25.14 |
| | SCHP [52] | TPAMI | 2S-TD | ResNet101 | 150 | 45.21 | 45.25 | 35.10 |
| 2021 | MGHR [59] | CVPR | BU | ResNet101 | 150 | 41.40 | 44.30 | 39.00 |
| 2022 | AIParsing [142] | TIP | 1S-TD | ResNet101 | 75 | 40.10 | 46.60 | 43.20 |

5.2.2 Results
The PASCAL-Person-Part benchmark is the classical benchmark for multiple human parsing. Table 10 gathers the results of 7 models on the PASCAL-Person-Part test set. PGN [8] ranks first in the mIoU metric; in the APr vol/APr 50 metrics, MGHR [59] and NAN [141] are currently the best two methods. The results on the CIHP val set are summarized in Table 11. As seen, SCHP [52] performs the best on all metrics, yielding 67.47% mIoU, 52.74% APr vol, and 58.94% APr 50.
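A simplified sketch of the region-based AP computation described above: predictions are ranked by confidence, greedily matched to ground-truth part instances at a given IoU threshold, and the AP is averaged over thresholds 0.1–0.9 for APr vol. This is a single-image illustration with names of our own choosing; the actual benchmarks aggregate statistics over the whole dataset:

```python
import numpy as np

def ap_at_threshold(pred_masks, pred_scores, gt_masks, thr):
    """AP at one IoU threshold: rank predictions by score, greedily match to GT."""
    order = np.argsort(pred_scores)[::-1]
    matched = set()
    tp, fp = np.zeros(len(order)), np.zeros(len(order))
    for rank, i in enumerate(order):
        best_iou, best_j = 0.0, -1
        for j, g in enumerate(gt_masks):
            if j in matched:
                continue
            inter = np.logical_and(pred_masks[i], g).sum()
            union = np.logical_or(pred_masks[i], g).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= thr:
            matched.add(best_j)
            tp[rank] = 1
        else:
            fp[rank] = 1
    tp_c, fp_c = np.cumsum(tp), np.cumsum(fp)
    recall = tp_c / max(len(gt_masks), 1)
    precision = tp_c / np.maximum(tp_c + fp_c, 1e-9)
    # area under the (step-wise) precision-recall curve
    return float(np.sum((recall - np.concatenate(([0.0], recall[:-1]))) * precision))

def apr_vol(pred_masks, pred_scores, gt_masks):
    """Mean AP over IoU thresholds 0.1..0.9 in steps of 0.1; APr 50 is the 0.5 entry."""
    thrs = np.arange(0.1, 1.0, 0.1)
    return float(np.mean([ap_at_threshold(pred_masks, pred_scores, gt_masks, t)
                          for t in thrs]))
```

A perfect prediction (a mask identical to its ground truth) attains AP 1.0 at every threshold, so `apr_vol` returns 1.0; APp would be computed the same way, but with IoU taken over whole-body instance masks.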
Table 12 summarizes 8 models on the MHP-v2 val set. SCHP again achieves the best mIoU, while RP R-CNN [140] holds the best APp vol/APp 50 results so far.

5.3 VHP Performance Benchmarking
The VIP dataset is widely used to benchmark video human parsing. We selected 11 models since 2018.

5.3.1 Evaluation Metrics
As in multiple human parsing, mIoU and APr vol are adopted for video human parsing performance evaluation.
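One way to extend mIoU from single frames to a clip is to accumulate per-class intersection and union counts across all frames before taking the per-class ratio; whether VIP's protocol accumulates over the clip or averages per-frame scores is not specified here, so the following is only a sketch under that assumption:

```python
import numpy as np

def video_miou(gt_frames, pred_frames, num_classes):
    """mIoU over a clip: accumulate per-class intersection/union across frames."""
    inter = np.zeros(num_classes)
    union = np.zeros(num_classes)
    for gt, pred in zip(gt_frames, pred_frames):
        gt, pred = np.asarray(gt), np.asarray(pred)
        for c in range(num_classes):
            gt_c, pred_c = (gt == c), (pred == c)
            inter[c] += np.logical_and(gt_c, pred_c).sum()
            union[c] += np.logical_or(gt_c, pred_c).sum()
    valid = union > 0                 # ignore classes absent from the whole clip
    return float(np.mean(inter[valid] / union[valid]))
```

For a clip where every predicted frame equals its ground truth, this returns 1.0.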
5.3.2 Results
Table 13 gives the results of recent methods on the VIP val set. LIIR [153] and UVC+ [155] clearly achieve the best performance in the mIoU and APr vol metrics, respectively.

5.4 Summary
Through the above performance comparison, we can observe several apparent phenomena. The first and most important concerns the fairness of experimental settings: for single and multiple human parsing, many studies do not report detailed experimental settings, or differ greatly in several essential hyper-parameters, making fair comparison impossible. The second is that most methods report neither the number of parameters nor the inference time, which allows some methods to gain an advantage in comparisons simply by increasing model capacity, and also causes trouble for computation-sensitive application scenarios such as social media and autonomous driving.
In addition to the above phenomena, we can also summarize some positive signals. Firstly, in recent years, human parsing research has shown an upward trend, especially since 2020.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE

TABLE 13: Quantitative VHP results on VIP val (§5.3.2) in terms of mIoU and APr vol. The three best scores are marked in red, blue, and green, respectively.

Year  Method              Pub.     Backbone  mIoU  APr vol
2019  TimeCycle [146]     CVPR     ResNet50  28.9  15.6
2019  UVC [147]           NeurIPS  ResNet18  34.1  17.7
2020  CRW [148]           NeurIPS  ResNet18  38.6  -
2021  ContrastCorr [149]  AAAI     ResNet18  37.4  21.6
2021  CLTC [150]          CVPR     ResNet18  37.8  19.1
2021  VFS [151]           ICCV     ResNet18  39.9  -
2021  JSTG [152]          ICCV     ResNet18  40.2  -
2022  LIIR [153]          CVPR     ResNet18  41.2  22.1
2022  SCC [154]           CVPR     ResNet18  40.8  -
2022  UVC+ [155]          ArXiv    ResNet18  38.3  22.2
Secondly, although some studies have achieved high performance on LIP, CIHP, and VIP, these benchmarks are still not saturated; the community still needs to continue its efforts. Thirdly, some specific issues and hotspots of human parsing are gradually attracting attention, which will further promote the progress of the whole field.

6 AN OUTLOOK: FUTURE OPPORTUNITIES OF HUMAN PARSING

After ten years of development, with the whole community's efforts, human parsing has made remarkable achievements, but it has also encountered a bottleneck. In this section, we discuss the opportunities of human parsing in the next era from multiple perspectives, hoping to promote progress in the field.

6.1 A Transformer-based Baseline for Human Parsing

Although several mainstream benchmarks of human parsing are not yet saturated, accuracy growth has slowed down. The reason, we believe, is that some advances in deep learning have not yet benefited the human parsing task (e.g., transformers [166]–[168] and unsupervised representation learning [158], [169]–[171]), and that researchers lack a concise and easily extensible code base. Therefore, the community urgently needs a new and strong baseline. We consider that a new human parsing baseline should have the following four characteristics: a) Universality: it can be applied to all mainstream human parsing tasks, including SHP, MHP, and VHP; b) Conciseness: the baseline method should not be too complex; c) Extensibility: a complete code base, easy to modify or extend with other modules or methods; d) High performance: state-of-the-art, or at least comparable, performance on the mainstream benchmarks under a fair experimental setting.
Based on the above views, we design a new transformer-based baseline for human parsing. The proposed baseline builds on the Mask2Former [172] architecture, with a few improvements adapted to human parsing, and is called Mask2Former for Parsing (M2FP). M2FP can adapt to almost all human parsing tasks and yields impressive performance.1

1. Code and models are publicly available at https://github.com/soeaver/M2FP

6.1.1 Mask2Former for Parsing

Modeling Humans as Group Queries. To solve the three human parsing sub-tasks, we need to simultaneously model the relationship among parts and distinguish human instances. The DETR series of work [168], [172]–[174] regards objects as queries and transforms object detection or instance segmentation into a direct set prediction problem. A naive idea is to regard human parts as queries, then use mask classification to predict the category and mask of each part. However, this creates two problems that cannot be ignored. Firstly, modeling only parts makes it difficult to learn the global relationship between parts and humans; secondly, the subordination between a part and a human instance is unknown, making the approach unsuitable for the MHP task. Thus, we introduce the body hierarchy into the queries and use the powerful sequence encoding ability of the transformer to build multiple hierarchical relationships between parts and humans. Specifically, we explicitly divide the queries into three groups: background queries, part queries, and human queries.
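The grouped-query idea can be sketched as follows; the group sizes and names are illustrative assumptions, not the released M2FP code. Three learnable embedding tables are concatenated into a single query sequence, so that attention layers can later mix information across groups:

```python
import torch
from torch import nn

class GroupedQueries(nn.Module):
    """Background / part / human queries concatenated into one sequence."""
    def __init__(self, n_bkg=1, n_part=50, n_human=50, dim=256):
        super().__init__()
        self.bkg = nn.Embedding(n_bkg, dim)
        self.part = nn.Embedding(n_part, dim)
        self.human = nn.Embedding(n_human, dim)

    def forward(self, batch_size):
        # Concatenate the three groups along the query axis.
        q = torch.cat([self.bkg.weight, self.part.weight, self.human.weight], dim=0)
        return q.unsqueeze(0).expand(batch_size, -1, -1)

queries = GroupedQueries()(batch_size=2)
print(queries.shape)  # torch.Size([2, 101, 256])
```

Because all groups live in one sequence, no extra machinery is needed to relate a part query to a human query: the attention layers see them jointly.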
Through the relationship-modeling ability of the self-attention mechanism, besides the basic part-part relationship, the part-human, human-human, and part/human-background relationships are also modeled. Thanks to the direct modeling of parts and the introduction of multiple hierarchical granularities, M2FP can be applied to all supervised human parsing tasks.

Architecture and Pipeline. The architecture of the proposed M2FP is illustrated in Figure 6. We make the smallest possible modifications to Mask2Former. An encoder, composed of a backbone and a pixel decoder [173], extracts image or video features. The features are then flattened and fed into a transformer decoder, which consists of multiple repeated units, each containing a masked attention module, a self-attention module, and a shared feed-forward network (FFN) in turn.
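One such repeated unit can be sketched roughly as below, under the Mask2Former-style ordering just described (masked cross-attention, then self-attention, then FFN); the dimensions are illustrative and the attention-mask construction is simplified away:

```python
import torch
from torch import nn

class DecoderUnit(nn.Module):
    """One repeated unit: masked cross-attention, self-attention, FFN."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))

    def forward(self, queries, feats, attn_mask=None):
        # Masked attention: queries attend to image features, optionally
        # restricted (via attn_mask) to the masks predicted by the previous
        # unit, as in Mask2Former.
        q, _ = self.cross_attn(queries, feats, feats, attn_mask=attn_mask)
        queries = queries + q
        # Self-attention mixes information across the grouped queries.
        q, _ = self.self_attn(queries, queries, queries)
        queries = queries + q
        return queries + self.ffn(queries)

unit = DecoderUnit()
out = unit(torch.randn(2, 101, 256), torch.randn(2, 400, 256))
print(out.shape)  # torch.Size([2, 101, 256])
```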
The grouped queries and flattened features exchange information thoroughly through the transformer decoder, and a bipartite matcher finally produces a unique matching between queries and ground truths. For SHP, at inference time the background and part masks are combined with their class predictions through matrix multiplication to compute the final semantic segmentation prediction. For MHP, the intersection ratio between the semantic segmentation prediction and the human masks is calculated to obtain the final instance-level human parsing prediction. M2FP can also be extended to the supervised VHP task: following [175], the background, parts, and humans in a video can be regarded as 3D spatio-temporal masks, and the sequence encoding ability of the transformer is used to make an end-to-end prediction.

6.1.2 Experiments

Experimental Setup.
We validate M2FP on several mainstream benchmarks, including LIP, PASCAL-Person-Part, CIHP, and MHP-v2. All models are trained with nearly identical hyper-parameters on 8 NVIDIA V100 GPUs. Specifically, we use the AdamW [176] optimizer with a mini-batch size of 16 and an initial learning rate of 0.0004 under a poly (LIP) or step (PASCAL-Person-Part, CIHP, and MHP-v2) learning rate schedule, and train each model for 150 epochs. Large-scale jittering in the range [0.1, 2.0] and typical data augmentation techniques are also used, e.g., fixed-size random crop (512×384 for LIP, 800×800 for PASCAL-Person-Part, CIHP, and MHP-v2), random rotation in [-40°, +40°], random color jittering, and horizontal flip.

Fig. 6: Architecture of the proposed M2FP (§6.1). Through the explicit construction of background, part, and human queries, we can model the relationship between humans and parts, and predict high-quality masks. [Diagram: background/part/human queries and encoder features enter the transformer decoder; its background/part/human predictions are grouped into SHP or MHP/VHP outputs.]

Fig. 7: Comparison of M2FP with previous human parsing state-of-the-art models. M2FP achieves state-of-the-art (PPP, CIHP, and MHP-v2) or comparable (LIP) performance on all human parsing sub-tasks. [Bar chart; previous SOTA per metric: LIP mIoU 60.3 (HSSN), LIP pixAcc. 89.1 (HIPN), PPP mIoU 68.4 (PGN), PPP APr 55.9 (MGHR), CIHP mIoU 67.4 (SCHP), CIHP APr 52.7 (SCHP), MHP-v2 mIoU 45.2 (SCHP), MHP-v2 APp 46.8 (RP R-CNN).]
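The inference rules described in §6.1.1 — semantic segmentation from class probabilities and mask logits via matrix multiplication, then instance assignment by intersection ratio with the human masks — can be sketched roughly as follows. Shapes, thresholds, and the single-ratio assignment are illustrative simplifications, not the exact M2FP post-processing:

```python
import torch

C, Q, H, W = 20, 51, 64, 64              # classes, bkg+part queries, spatial size
cls_prob = torch.rand(Q, C).softmax(-1)  # per-query class probabilities
mask_logits = torch.randn(Q, H, W)       # per-query mask logits

# SHP: combine class probabilities with mask probabilities by matrix
# multiplication, then take the argmax class per pixel.
sem = torch.einsum("qc,qhw->chw", cls_prob, mask_logits.sigmoid())
sem_pred = sem.argmax(0)                 # (H, W) semantic parsing map

# MHP (simplified): score each predicted human instance by the ratio of
# foreground part pixels it covers, i.e. an intersection ratio between the
# semantic prediction and the human masks.
human_masks = torch.rand(3, H, W) > 0.5  # 3 hypothetical human instance masks
part = sem_pred > 0                      # foreground (non-background) pixels
ratios = (human_masks & part).flatten(1).float().sum(1) / part.sum().clamp(min=1)
print(ratios.shape)  # torch.Size([3])
```

In the full method, the intersection ratio would be evaluated per part segment rather than globally, so that each part pixel is attached to one human instance.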
For fair comparison, horizontal flipping is adopted during testing, and multi-scale testing is used for LIP. The default backbone is ResNet-101 pre-trained on ImageNet-1K [177].

Results. As shown in Table 14 and Figure 7, M2FP achieves state-of-the-art or comparable performance across a broad range of human parsing benchmarks. For SHP, M2FP falls behind only HIPN [125] and CDGNet [131], obtaining 88.93% pixAcc. and 59.86% mIoU, showing great potential in modeling the relationship among parts. For MHP, M2FP shows remarkable performance, greatly surpassing existing methods on all metrics and even exceeding the state-of-the-art two-stage top-down method, i.e., SCHP [52]. Specifically, M2FP outperforms PGN [8] by 4.14 points mIoU and MGHR [59] by 0.56 points APr vol on PASCAL-Person-Part. On the more challenging CIHP and MHP-v2, M2FP beats SCHP in terms of mIoU while running in an end-to-end manner. Meanwhile, M2FP is also 7.96 points ahead of SCHP in APr vol (CIHP) and 5.97 points ahead of RP R-CNN [140] in APp vol (MHP-v2). These results demonstrate that M2FP surpasses almost all human parsing methods in a concise, effective, and universal way, and can be regarded as a new baseline for the next era.

TABLE 14: Overview of M2FP results on various human parsing benchmarks; the listed per-method numbers are the previous state-of-the-art results. Bold results denote a new state-of-the-art achieved by M2FP.

                 LIP              PPP              CIHP             MHP-v2
Method           pixAcc.  mIoU    mIoU   APr vol   mIoU   APr vol   mIoU   APp vol
HIPN [125]       89.14    59.61   -      -         -      -         -      -
HSSN [5]         -        60.37   -      -         -      -         -      -
PGN [8]          -        -       68.40  39.20     55.80  33.60     -      -
MGHR [59]        -        -       -      55.90     -      -         41.40  44.30
SCHP [52]        -        -       -      -         67.47  52.74     45.21  45.25
RP R-CNN [140]   -        -       63.30  40.90     60.20  42.30     38.60  46.80
M2FP (ours)      88.93    59.86   72.54  56.46     69.15  60.47     47.64  53.36

6.2 Under-Investigated Open Issues

Based on the reviewed research, we list several under-investigated open issues that we believe should be pursued.

Efficient Inference. In practical applications, human parsing models generally need real-time or even faster inference speed.
The current research has not paid enough attention to this issue, especially in multiple human parsing. Although some works [59], [142] have discussed model efficiency, they cannot achieve real-time inference, and no human parser has been designed specifically for this purpose. Therefore, from the perspective of practical application, designing an efficient human parsing model remains an under-investigated open issue.

Synthetic Dataset. It is a common practice in many fields to use synthetic datasets to train models and then transfer them to real scenes. Through CG technology (e.g., NVIDIA Omniverse²), we can obtain almost unlimited synthetic human data, together with parsing annotations, at a very low cost.
Considering the labeling cost of human parsing datasets, this is a very attractive scheme. Wood et al. have made a preliminary attempt on the face parsing task and achieved excellent performance [178], but at present such research is lacking in the human parsing field.

2. https://developer.nvidia.com/nvidia-omniverse

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 15

Long-tailed Phenomenon. Long-tailed distributions are pervasive in the real world and also exist in the human parsing field. For example, the Gini coefficient of MHP-v2.0 is as high as 0.747 [179], exceeding that of some artificially created long-tailed datasets, yet this problem is currently ignored. As a result, existing methods are often brittle once exposed to the real world, where they are unable to adapt to and robustly handle tail categories. This calls for a more general human parsing model with the ability to adapt to long-tailed distributions in the real world.

6.3 New Directions

Considering some potential applications, we shed light on several possible research directions.

Video Instance-level Human Parsing. The current VHP research basically follows an unsupervised, semi-automatic video object segmentation setting, which reduces the labeling cost at a great loss of accuracy. However, most practical applications of video human parsing require extremely high precision.
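As a brief aside on the long-tailed issue above: a Gini coefficient like the 0.747 cited for MHP-v2.0 can be computed from per-category frequencies, with 0 meaning perfectly balanced classes and values near 1 a heavily skewed head-and-tail split. A minimal sketch (the counts below are hypothetical, not the real MHP-v2.0 statistics):

```python
def gini(counts):
    """Gini coefficient of a category-frequency distribution.

    Uses the standard closed form over the ascending-sorted counts:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with 1-based i.
    """
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

# Hypothetical per-category sample counts (for illustration only)
balanced = [100, 100, 100, 100]
longtail = [10000, 500, 50, 5]
print(round(gini(balanced), 3))  # 0.0
print(round(gini(longtail), 3))  # 0.721
```

A value above 0.7 for only four categories already indicates a distribution dominated by its head class, which is the regime the survey argues current parsers are not trained to handle.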
Therefore, making full use of annotations and solving the VHP issue in an instance-discriminative manner, i.e., as a fine-grained video instance segmentation task, has great research prospects.

Whole-body Human Parsing. Besides human parsing, face parsing and hand parsing [72], [73] are also important issues. To fully understand the pixel-wise temporal-spatial attributes of humans in the wild, it is necessary to parse the body, face, and hands simultaneously, which implies a new direction of parsing the whole body end to end: Whole-body Human Parsing. Natural hierarchical annotation and large-scale variation bring new challenges to existing parsing techniques; thus, targeted datasets and whole-body parsers are necessary.

Cooperation across Different Human-centric Directions.
Some human-centric visual tasks (e.g., human attribute recognition [180], pose estimation [181], human mesh reconstruction [70]) face challenges similar to those of human parsing. Although these fields have developed independently, different tasks can play a positive role in promoting each other. Moreover, the settings of different human-centric visual tasks are related, yet there is no precedent for modeling these tasks in a unified framework. Thus, we call for closer collaboration across different human-centric visual tasks.

7 CONCLUSIONS

As far as we know, this is the first survey to comprehensively review deep learning techniques for human parsing, covering three sub-tasks: SHP, MHP, and VHP. We first provided readers with the necessary knowledge, including task settings, background concepts, relevant problems, and applications.
Afterward, we summarized the mainstream deep learning methods based on a human parsing taxonomy, analyzing them according to their theoretical background, technical contributions, and solving strategies. We also reviewed 14 popular human parsing datasets and benchmarked results on the 6 most widely used ones. To promote sustainable community development, we discussed under-investigated open issues and provided insight into new directions. We also put forward a new transformer-based human parsing framework, serving as a high-performance baseline for follow-up research through a universal, concise, and extensible solution. In summary, we hope this survey provides an effective way to understand current state-of-the-art human parsing models and promotes the sustainable development of this research field.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Dong, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lin, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yan, “Deep human parsing with active template regression,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 37, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 12, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 2402–2414, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1, 5, 6, 7, 10, 11 [23] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Fulkerson, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Vedaldi, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Soatto, “Class segmentation and object localization with superpixel neighborhoods,” in Proceedings of the IEEE International Conference on Computer Vision, 2009, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 670–677.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1 [24] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Tighe and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lazebnik, “Superparsing: scalable nonparametric image parsing with superpixels,” in Proceedings of the European Conference on Computer Vision, 2010, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 352–365.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1 [25] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Feng, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Domokos, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Huang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hu, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yan, “Fashion parsing with weak color-category labels,” IEEE Transactions on Multimedia, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 16, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 253–265, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1, 2, 5, 6, 7, 10, 11 [26] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Krizhevsky, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Sutskever, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hinton, “Imagenet classifica- tion with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1 [27] R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Girshick, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Donahue, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Darrell, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Malik, “Rich feature hierarchies for accurate object detection and semantic segmenta- tion,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 580–587.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1 [28] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Jia, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shelhamer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Donahue, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Karayev, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Long, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Girshick, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Guadarrama, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Darrell, “Caffe: Convolutional architecture for fast feature embedding,” in Proceedings of the 22nd ACM international conference on Multimedia, 2014, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 675–678.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1 [29] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' LeCun, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Bengio, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hinton, “Deep learning,” Nature, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 521, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7553, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 436–444, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1 [30] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Szegedy, W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Jia, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Sermanet, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Reed, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Anguelov, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Erhan, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Vanhoucke, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1–9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1 [31] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shelhamer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Long, and T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Darrell, “Fully convolutional networks for semantic segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 39, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 640–651, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1, 5, 11 [32] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' He, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Ren, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 770–778.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1 [33] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xu, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yuille, “Attention to scale: Scale-aware semantic image segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3640–3649.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1, 4, 5, 7, 10 [34] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shen, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xiang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Feng, L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lin, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yan, “Semantic object parsing with local-global long short-term memory,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3185–3193.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1, 4, 5, 7, 11 [35] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Song, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wu, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hu, “Attention inspiring receptive-fields network for learning invariant representations,” IEEE Transactions on Neural Networks and Learning Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 30, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 6, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1744–1755, 2018.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1 [36] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Cheng, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wei, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Huang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xiong, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Huang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hwu, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shi, “Spgnet: Semantic prediction guidance for scene parsing,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5218–5228.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1, 4, 5, 7 [37] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Tang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lin, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yan, “Human parsing with contextualized convolutional neural network,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1386–1394.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1, 2, 3, 5, 7, 10, 11 [38] F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xia, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yuille, “Zoom better to see clearer: Human and object parsing with hierarchical auto-zoom net,” in Proceedings of the European Conference on Computer Vision, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 648–663.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1, 5, 7 [39] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, and M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Tang, “Part-aware con- text network for human parsing,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 8971–8980.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1, 3, 5, 7, 12 [40] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Song, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xu, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, “Quality-aware network for human parsing,” arXiv preprint arXiv:2103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='05997, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1 [41] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Ji, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Du, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhao, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Huang, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lyu, “Learning semantic neural tree for human parsing,” in Proceedings of the European Conference on Computer Vision, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 205–221.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1, 5, 6, 7, 8, 12 [42] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Gong, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Gao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shen, M.' 
Wang, and L. Lin, "Graphonomy: Universal human parsing via graph transfer learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 7450–7459. 1, 5, 6, 7
[43] X. Zhang, Y. Chen, M. Tang, J. Wang, X. Zhu, and Z. Lei, "Human parsing with part-aware relation modeling," IEEE Transactions on Multimedia, 2022. 1, 5, 6, 7, 12
[44] T. Ruan, T. Liu, Z. Huang, Y. Wei, S. Wei, and Y. Zhao, "Devil in the details: Towards accurate single and multiple human parsing," in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, pp. 4814–4821. 1, 2, 5, 6, 7, 8, 10, 12
[45] Z. Zhang, C. Su, L. Zheng, and X. Xie, "Correlating edge, pose with parsing," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 8900–8909. 1, 5, 6, 7, 11, 12
[46] Y. Liu, L. Zhao, S. Zhang, and J. Yang, "Hybrid resolution network using edge guided region mutual information loss for human parsing," in Proceedings of the 28th ACM International Conference on Multimedia, 2020, pp. 1670–1678. 1, 5, 6, 7, 12
[47] X. Nie, J. Feng, and S. Yan, "Mutual learning to adapt for joint human parsing and pose estimation," in Proceedings of the European Conference on Computer Vision, 2018, pp. 502–517. 1, 5, 6, 7, 12
[48] Y. Zhao, J. Li, Y. Zhang, and Y. Tian, "From pose to part: Weakly-supervised pose evolution for human part segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. 1, 3, 5, 6, 7
[49] S. Liu, Y. Sun, D. Zhu, G. Ren, Y. Chen, J. Feng, and J. Han, "Cross-domain human parsing via adversarial feature and label adaptation," in Proceedings of the AAAI Conference on Artificial Intelligence, 2018, pp. 7146–7153. 1, 2, 5, 7
[50] Y. Luo, Z. Zheng, L. Zheng, T. Guan, J. Yu, and Y. Yang, "Macro-micro adversarial network for human parsing," in Proceedings of the European Conference on Computer Vision, 2018, pp. 418–434. 1, 5, 7, 12
[51] T. Li, Z. Liang, S. Zhao, J. Gong, and J. Shen, "Self-learning with rectification strategy for human parsing," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 9263–9272. 1, 3, 5, 7, 12
[52] P. Li, Y. Xu, Y. Wei, and Y. Yang, "Self-correction for human parsing," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. 1, 5, 7, 8, 11, 12, 14
[53] M. Mameli, M. Paolanti, R. Pietrini, G. Pazzaglia, E. Frontoni, and P. Zingaretti, "Deep learning approaches for fashion knowledge extraction from social media: A review," IEEE Access, 2021. 1
[54] W. Cheng, S. Song, C.-Y. Chen, S. C. Hidayati, and J. Liu, "Fashion meets computer vision: A survey," ACM Computing Surveys, vol. 54, no. 4, pp. 1–41, 2021. 1
[55] K. Khan, R. U. Khan, K. Ahmad, F. Ali, and K.-S. Kwak, "Face segmentation: A journey from classical to deep learning paradigm, approaches, trends, and directions," IEEE Access, vol. 8, pp. 58 683–58 699, 2020. 1
[56] S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz, and D. Terzopoulos, "Image segmentation using deep learning: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 2, 5
[57] X. Liu, M. Zhang, W. Liu, J. Song, and T. Mei, "BraidNet: Braiding semantics and details for accurate human parsing," in Proceedings of the 27th ACM International Conference on Multimedia, 2019, pp. 338–346. 2, 5, 7, 8, 12
[58] L. Yang, Z. Liu, T. Zhou, and Q. Song, "Part decomposition and refinement network for human parsing," IEEE/CAA Journal of Automatica Sinica, 2022. 3
[59] T. Zhou, W. Wang, S. Liu, Y. Yang, and L. Van Gool, "Differentiable multi-granularity human representation learning for instance-aware human semantic parsing," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 1622–1631. 3, 7, 8, 12, 14
[60] Z. Liu, X. Zhu, L. Yang, X. Yan, M. Tang, Z. Lei, G. Zhu, X. Feng, Y. Wang, and J. Wang, "Multi-initialization optimization network for accurate 3D human pose and shape estimation," in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 1976–1984. 3
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 17
[61] L. Yang, Q. Song, Z. Wang, and M. Jiang, "Parsing R-CNN for instance-level human analysis," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 364–373. 3, 7, 8, 12
[62] D. de Geus, P. Meletis, C. Lu, X. Wen, and G. Dubbelman, "Part-aware panoptic segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 5485–5494. 3
[63] W. Wang, T. Zhou, F. Porikli, D. Crandall, and L. Van Gool, "A survey on deep learning technique for video segmentation," arXiv preprint arXiv:2107.01153, 2021. 3
[64] H.-S. Fang, G. Lu, X. Fang, J. Xie, Y.-W. Tai, and C. Lu, "Weakly and semi supervised human body part parsing via pose-guided knowledge transfer," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 70–78. 3, 5
[65] H. He, J. Zhang, B. Thuraisingham, and D. Tao, "Progressive one-shot human parsing," in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, pp. 1522–1530. 3, 5
[66] H. He, J. Zhang, B. Zhuang, J. Cai, and D. Tao, "End-to-end one-shot human parsing," arXiv preprint arXiv:2105.01241, 2021. 3
[67] Y. Gao, L. Liang, C. Lang, S. Feng, Y. Li, and Y. Wei, "Clicking matters: Towards interactive human parsing," IEEE Transactions on Multimedia, 2022. 3
[68] Q. Chen, T. Ge, Y. Xu, Z. Zhang, X. Yang, and K. Gai, "Semantic human matting," in Proceedings of the 26th ACM International Conference on Multimedia, 2018, pp. 618–626. 3
[69] J. Liu, Y. Yao, W. Hou, M. Cui, X. Xie, C. Zhang, and X.-S. Hua, "Boosting semantic human matting with coarse annotations," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 8563–8572. 3
[70] R. A. Guler and I. Kokkinos, "HoloPose: Holistic 3D human reconstruction in-the-wild," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 10 884–10 894. 3, 15
[71] Z. Zheng, T. Yu, Y. Wei, Q. Dai, and Y. Liu, "DeepHuman: 3D human reconstruction from a single image," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 7739–7749. 3
[72] H. Liang, J. Yuan, and D. Thalmann, "Parsing the hand in depth images," IEEE Transactions on Multimedia, vol. 16, no. 5, pp. 1241–1253, 2014. 3, 15
[73] J. Lin, H. Yang, D. Chen, M. Zeng, F. Wen, and L. Yuan, "Face parsing with RoI tanh-warping," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 5654–5663. 3, 15
[74] R. A. Guler, N. Neverova, and I. Kokkinos, "DensePose: Dense human pose estimation in the wild," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7297–7306. 3, 10, 11
[75] T. Zhu, P. Karlsson, and C. Bregler, "SimPose: Effectively learning DensePose and surface normals of people from simulated data," in Proceedings of the European Conference on Computer Vision, 2020, pp. 225–242. 3
[76] M. M. Kalayeh, E.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Basaran, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Gokmen, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Kamasak, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shah, “Human semantic parsing for person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1062–1071.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3, 12 [77] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Huang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Huang, and S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, “Towards rich feature discovery with class activation maps augmentation for person re-identification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1389–1398.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3 [78] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Sun, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zheng, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Tian, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, “Learning part-based convolutional features for person re-identification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 43, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 902–917, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3 [79] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Huang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lin, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Huang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xu, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Huang, “Improve person re-identification with part awareness learning,” 2, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 29, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7468–7481, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3 [80] Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lv, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yuan, “Person re-identification with part prediction alignment,” Computer Vision and Image Understanding, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 205, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3 [81] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Tian, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yi, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shi, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yan, and X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, “Eliminating background-bias for robust person re-identification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5794–5803.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3 [82] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhu, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Gong, “Instance-guided context rendering for cross-domain person re-identification,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 232–242.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3 [83] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yan, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Qiao, “Cocas: A large-scale clothes changing person dataset for re-identification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3400–3409.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3 [84] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Qian, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Fu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Tao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Jiang, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xue, “Long-term cloth-changing person re-identification,” in Proceedings of the Asian Conference on Computer Vision, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 71–88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3 [85] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Han, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wu, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yu, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Davis, “Viton: An image- based virtual try-on network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7543–7552.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4 [86] B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zheng, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lin, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, “Toward characteristic-preserving image-based virtual try-on network,” in Proceedings of the European Conference on Computer Vision, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 589–604.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4 [87] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xie, “Vtnfp: An image-based virtual try- on network with body and clothing feature preservation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 10 511–10 520.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4 [88] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wu, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lin, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Tao, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Cai, “M2e-try on net: Fashion from model to everyone,” in Proceedings of the 27th ACM International Conference on Multimedia, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 293–301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4 [89] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Dong, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shen, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lai, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hu, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yin, “Towards multi-pose guided virtual try-on network,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 9026–9035.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4 [90] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Song, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Tong, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Tang, “Toward realistic virtual try-on through landmark-guided shape matching,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 2118–2126.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4 [91] Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xie, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhao, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Dong, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Kampffmeyer, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yan, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liang, “Was-vton: Warping architecture search for virtual try-on network,” in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 3350–3359.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4 [92] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhao, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xie, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Kampffmeyer, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Dong, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Han, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zheng, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liang, “M3d-vton: A monocular-to-3d virtual try- on network,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 13 239–13 249.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4 [93] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Issenhuth, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Mary, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Calauzenes, “Do not mask what you do not need to mask: a parser-free virtual try-on,” in Proceedings of the European Conference on Computer Vision, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 619–635.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4 [94] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chang, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Peng, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' He, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Jiang, “Pf-vton: Toward high-quality parser-free virtual try-on network,” in International Conference on Multimedia Modeling, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 28–40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4 [95] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lin, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhou, S.' 
… Hu, J. Zhang, L. Luo, J. Zhang, L. Huang, and Y. He, "RMGN: A regional mask guided network for parser-free virtual try-on," arXiv preprint arXiv:2204.11258, 2022.
[96] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014.
[97] T. Karras, S. Laine, and T. Aila, "A style-based generator architecture for generative adversarial networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4401–4410.
[98] M. Niemeyer and A. Geiger, "GIRAFFE: Representing scenes as compositional generative neural feature fields," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 11453–11464.
[99] A. Nichol, P. Dhariwal, A. Ramesh, P. Shyam, P. Mishkin, B. McGrew, I. Sutskever, and M. Chen, "GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models," arXiv preprint arXiv:2112.10741, 2021.
[100] B. Wu, Z. Xie, X. Liang, Y. Xiao, H. Dong, and L. Lin, "Image comes dancing with collaborative parsing-flow video synthesis," IEEE Transactions on Image Processing, vol. 30, pp. 9259–9269, 2021.
[101] A. Frühstück, K. K. Singh, E. Shechtman, N. J. Mitra, P. Wonka, and J. Lu, "InsetGAN for full-body image generation," arXiv preprint arXiv:2203.07293, 2022.
[102] R. Chen, X. Chen, B. Ni, and Y. Ge, "SimSwap: An efficient framework for high fidelity face swapping," in Proceedings of the 28th ACM International Conference on Multimedia, 2020, pp. 2003–2011.
[103] L. Yang, Q. Song, and Y. Wu, "Attacks on state-of-the-art face recognition using attentional adversarial attack generative network," Multimedia Tools and Applications, vol. 80, no. 1, pp. 855–875, 2021.
[104] Y. Liu, W. Chen, L. Liu, and M. S. Lew, "SwapGAN: A multistage generative approach for person-to-person fashion style transfer," IEEE Transactions on Multimedia, vol. 21, no. 9, pp. 2209–2222, 2019.
[105] J. Huo, S. Jin, W. Li, J. Wu, Y.-K. Lai, Y. Shi, and Y. Gao, "Manifold alignment for semantically aligned style transfer," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 14861–14869.
[106] Z. Ma, T. Lin, X. Li, F. Li, D. He, E. Ding, N. Wang, and X. Gao, "Dual-affinity style embedding network for semantic-aligned image style transfer," IEEE Transactions on Neural Networks and Learning Systems, 2022.
[107] B.-K. Kim, G. Kim, and S.-Y. Lee, "Style-controlled synthesis of clothing segments for fashion image manipulation," IEEE Transactions on Multimedia, vol. 22, no. 2, pp. 298–310, 2019.
[108] E. Ntavelis, A. Romero, I. Kastanis, L. V. Gool, and R. Timofte, "SESAME: Semantic editing of scenes by adding, manipulating or erasing objects," in Proceedings of the European Conference on Computer Vision, 2020, pp. 394–411.
[109] H.-Y. Tseng, M. Fisher, J. Lu, Y. Li, V. Kim, and M.-H. Yang, "Modeling artistic workflows for image generation and editing," in Proceedings of the European Conference on Computer Vision, 2020, pp. 158–174.
[110] S. Liu, X. Liang, L. Liu, X. Shen, J. Yang, C. Xu, and L. Lin, "Matching-CNN meets KNN: Quasi-parametric human parsing," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1419–1427.
[111] S. Liu, X. Liang, L. Liu, K. Lu, L. Lin, X. Cao, and S. Yan, "Fashion parsing with video context," IEEE Transactions on Multimedia, vol. 17, no. 8, pp. 1347–1358, 2015.
[112] F. Xia, J. Zhu, P. Wang, and A. L. Yuille, "Pose-guided human parsing by an and/or graph using pose-context features," in Proceedings of the AAAI Conference on Artificial Intelligence, 2016, pp. 3632–3640.
[113] X. Liang, X. Shen, J. Feng, L. Lin, and S. Yan, "Semantic object parsing with graph LSTM," in Proceedings of the European Conference on Computer Vision, 2016, pp. 125–143.
[114] X. Liang, L. Lin, W. Yang, P. Luo, J. Huang, and S. Yan, "Clothes co-parsing via joint image segmentation and labeling with application to clothing retrieval," IEEE Transactions on Multimedia, vol. 18, no. 6, pp. 1175–1186, 2016.
[115] X. Liang, L. Lin, X. Shen, J. Feng, S. Yan, and E. P. Xing, "Interpretable structure-evolving LSTM," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1010–1019.
[116] K. Gong, X. Liang, D. Zhang, X. Shen, and L. Lin, "Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 932–940.
[117] F. Xia, P. Wang, X. Chen, and A. L. Yuille, "Joint multi-person pose estimation and semantic part segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 6769–6778.
[118] B. Zhu, Y. Chen, M. Tang, and J. Wang, "Progressive cognitive human parsing," in Proceedings of the AAAI Conference on Artificial Intelligence, 2018, pp. 7607–7614.
[119] X. Luo, Z. Su, and J. Guo, "Trusted guidance pyramid network for human parsing," in Proceedings of the 26th ACM International Conference on Multimedia, 2018, pp. 654–662.
[120] Y. Zhao, J. Li, Y. Zhang, and Y. Tian, "Multi-class part parsing with joint boundary-semantic awareness," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 9177–9186.
[121] H. He, J. Zhang, Q. Zhang, and D. Tao, "Grapy-ML: Graph pyramid mutual learning for cross-dataset human parsing," in Proceedings of the AAAI Conference on Artificial Intelligence, 2020, pp. 10949–10956.
[122] Y. Yuan, X. Chen, and J. Wang, "Object-contextual representations for semantic segmentation," in Proceedings of the European Conference on Computer Vision, 2020, pp. 173–190.
[123] X. Zhang, Y. Chen, B. Zhu, J. Wang, and M. Tang, "Blended grammar network for human parsing," in Proceedings of the European Conference on Computer Vision, 2020, pp. 189–205.
[124] J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, D. Liu, Y. Mu, M. Tan, X. Wang, W. Liu, and B. Xiao, "Deep high-resolution representation learning for visual recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 10, pp. 3349–3364, 2020.
[125] Y. Liu, S. Zhang, J. Yang, and P. Yuen, "Hierarchical information passing based noise-tolerant hybrid learning for semi-supervised human parsing," in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, pp. 2207–2215.
[126] Z. Jin, T. Gong, D. Yu, Q. Chu, J.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shao, “Mining contextual information beyond image for semantic seg- mentation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7231–7241.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5, 12 [127] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Jin, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chu, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yu, “Isnet: Integrate image-level and semantic-level context for semantic segmentation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7189–7198.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5, 12 [128] D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zeng, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Huang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Bao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Su, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, “Neural architecture search for joint human parsing and pose estimation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 11 385–11 394.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5, 6, 7, 12 [129] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Su, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zheng, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xie, and Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, “On the correlation among edge, pose and parsing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5, 6, 7, 12 [130] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhou, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Qi, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shen, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhu, “Hierarchical human semantic parsing with comprehensive part-relation model- ing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5, 6, 7 [131] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Choi, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, and W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hwang, “Cdgnet: Class distri- bution guided network for human parsing,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4473–4482.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5, 7, 11, 12, 14 [132] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hochreiter and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Schmidhuber, “Long short-term memory,” Neural Computation, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 9, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1735—1780, 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4 [133] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shi, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Qi, X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Jia, “Pyramid scene parsing network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 2881–2890.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5 [134] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Papandreou, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Kokkinos, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Murphy, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yuille, “Deeplab: Semantic image segmentation with deep convo- lutional nets, atrous convolution, and fully connected crfs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 40, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 834–848, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5 [135] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Lin, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Dollar, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Girshick, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' He, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hariharan, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Belongie, “Feature pyramid networks for object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 2117–2125.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5 [136] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Kirillov, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Girshick, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' He, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Dollar, “Panoptic feature pyramid networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 6399–6408.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 5 [137] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Fang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Sun, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, and X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, “Densely connected search space for more flexible neural architecture search,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 10 628–10 637.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 6 [138] Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Arnab, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Torr, “Holistic, instance-level human parsing,” in British Machine Vision Conference, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7, 8, 12 [139] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Qin, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hong, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hung, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Tsai, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, “A top-down unified framework for instance-level human parsing,” in British Machine Vision Conference, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7, 8, 12 [140] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Song, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xin, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Jia, and S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Xu, “Renovating parsing r-cnn for accurate multiple human parsing,” in Proceedings of the European Conference on Computer Vision, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 421–437.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7, 8, 12, 14 [141] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yan, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Feng, “Fine-grained multi- human parsing,” International Journal of Computer Vision, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 128, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 2185–2203, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7, 8, 12 [142] S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Cao, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='-J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Qi, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Song, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhou, “Aiparsing: Anchor-free instance-level human parsing,” IEEE Transactions on Image Processing, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7, 8, 12, 14 [143] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Kiefel and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Gehler, “Human pose estimation with fields of parts,” in Proceedings of the European Conference on Computer Vision, 2014, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 331—346.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 19 [144] K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' He, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Gkioxari, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Dollar, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Girshick, “Mask r-cnn,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 2961–2969.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 7 [145] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Tian, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shen, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' He, “Fcos: A simple and strong anchor-free object detector,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 44, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 4, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 1922–1933, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 8 [146] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Jabri, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Efros, “Learning correspondence from the cycle-consistency of time,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 2566–2576.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 8, 13 [147] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Li, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Mello, X.' 
Wang, J. Kautz, and M.-H. Yang, “Joint-task self-supervised learning for temporal correspondence,” in Advances in Neural Information Processing Systems, 2019, pp. 318–328. 8, 13
[148] A. A. Jabri, A. Owens, and A. A. Efros, “Space-time correspondence as a contrastive random walk,” in Advances in Neural Information Processing Systems, 2020, pp. 19545–19560. 8, 13
[149] N. Wang, W. Zhou, and H. Li, “Contrastive transformation for self-supervised correspondence learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, pp. 10174–10182. 8, 13
[150] S. Jeon, D. Min, S. Kim, and K. Sohn, “Mining better samples for contrastive learning of temporal correspondence,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 1034–1044. 8, 9, 13
[151] J. Xu and X. Wang, “Rethinking self-supervised correspondence learning: A video frame-level similarity perspective,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 10075–10085. 8, 9, 13
[152] Z. Zhao, Y. Jin, and P.-A. Heng, “Modelling neighbor relation in joint space-time graph for video correspondence learning,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 9960–9969. 8, 9, 13
[153] L. Li, T. Zhou, W. Wang, L. Yang, J. Li, and Y. Yang, “Locality-aware inter- and intra-video reconstruction for self-supervised correspondence learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 8, 13
[154] J. Son, “Contrastive learning for space-time correspondence via self-cycle consistency,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 14679–14688. 8, 9, 13
[155] D. Mckee, Z. Zhan, B. Shuai, D. Modolo, J. Tighe, and S. Lazebnik, “Transfer of representations to video label propagation: implementation factors matter,” arXiv preprint arXiv:2203.05553, 2022. 8, 12, 13
[156] C. Vondrick, A. Shrivastava, A. Fathi, S. Guadarrama, and K. Murphy, “Tracking emerges by colorizing videos,” in Proceedings of the European Conference on Computer Vision, 2018, pp. 391–408. 8
[157] S. Liu, G. Zhong, S. D. Mello, J. Gu, V. Jampani, M.-H. Yang, and J. Kautz, “Switchable temporal propagation network,” in Proceedings of the European Conference on Computer Vision, 2018, pp. 87–102. 8
[158] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 9729–9738. 9, 13
[159] P. Luo, X. Wang, and X. Tang, “Pedestrian parsing via deep decompositional network,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 2648–2655. 10, 11
[160] X. Chen, R. Mottaghi, X. Liu, S. Fidler, R. Urtasun, and A. Yuille, “Detect what you can: Detecting and representing objects using holistic models and body parts,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1971–1978. 10, 11, 12
[161] J. Li, J. Zhao, Y. Wei, C. Lang, Y. Li, T. Sim, S. Yan, and J. Feng, “Multiple-human parsing in the wild,” arXiv preprint arXiv:1705.07206, 2017. 10, 11, 12
[162] J. Zhao, J. Li, Y. Cheng, T. Sim, S. Yan, and J. Feng, “Understanding humans in crowded scenes: Deep nested adversarial learning and a new benchmark for multi-human parsing,” in Proceedings of the 26th ACM International Conference on Multimedia, 2018, pp. 792–800. 10, 11, 12
[163] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in Proceedings of the European Conference on Computer Vision, 2014, pp. 740–755. 10, 12
[164] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The PASCAL visual object classes (VOC) challenge,” International Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, 2010. 10
[165] B. Hariharan, P. Arbelaez, R. Girshick, and J. Malik, “Simultaneous detection and segmentation,” in Proceedings of the European Conference on Computer Vision, 2014, pp. 297–312. 12
[166] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, 2017, pp. 6000–6010. 13
[167] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” in Proceedings of the International Conference on Learning Representations, 2021. 13
[168] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” in Proceedings of the European Conference on Computer Vision, 2020, pp. 213–229. 13
[169] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” in Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019, pp. 4171–4186. 13
[170] H. Bao, L. Dong, S. Piao, and F. Wei, “BEiT: BERT pre-training of image transformers,” in Proceedings of the International Conference on Learning Representations, 2022. 13
[171] K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. Girshick, “Masked autoencoders are scalable vision learners,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 13
[172] B. Cheng, I. Misra, A. G. Schwing, A. Kirillov, and R. Girdhar, “Masked-attention mask transformer for universal image segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. 13
[173] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, “Deformable DETR: Deformable transformers for end-to-end object detection,” in Proceedings of the International Conference on Learning Representations, 2021. 13
[174] B. Cheng, A. G. Schwing, and A. Kirillov, “Per-pixel classification is not all you need for semantic segmentation,” in Advances in Neural Information Processing Systems, 2021, pp. 17864–17875. 13
[175] B. Cheng, A. Choudhuri, I. Misra, A. Kirillov, R. Girdhar, and A. G. Schwing, “Mask2Former for video instance segmentation,” arXiv preprint arXiv:2112.10764, 2021. 13
[176] I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” in Proceedings of the International Conference on Learning Representations, 2019. 13
[177] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and F.-F. Li, “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015. 14
[178] E. Wood, T. Baltrusaitis, C. Hewitt, S. Dziadzio, M. Johnson, V. Estellers, T. J. Cashman, and J. Shotton, “Fake it till you make it: Face analysis in the wild using synthetic data alone,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 3681–3691.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 14 [179] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Jiang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Song, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Guo, “A survey on long-tailed visual recognition,” International Journal of Computer Vision, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 15 [180] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Song, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Hu, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, “Hier r-cnn: Instance-level human parts detection and a new benchmark,” IEEE Transactions on Image Processing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 30, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 39–54, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' 15 [181] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zheng, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Wu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Yang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Zhu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Chen, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shen, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Kehtarnavaz, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content=' Shah, “Deep learning-based human pose estimation: A survey,” arXiv preprint arXiv:2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Y9AyT4oBgHgl3EQfifgI/content/2301.00394v1.pdf'} +page_content='13392, 2020.' 
arXiv:2301.01574v1 [math.AP] 4 Jan 2023

POSITIVE JACOBIAN CONSTRAINTS FOR ELLIPTIC BOUNDARY VALUE PROBLEMS WITH PIECEWISE-REGULAR COEFFICIENTS ARISING FROM MULTI-WAVE INVERSE PROBLEMS

YVES CAPDEBOSCQ AND TIANRUI DAI

Abstract. Multi-wave inverse problems are indirect imaging methods using the interaction of two different imaging modalities. One brings spatial accuracy, and the other contrast sensitivity. The inversion method typically involves two steps. The first step is devoted to accessing internal data: quantities related to the unknown parameters being observed. The second step involves recovering the parameters themselves from the internal data. To perform that inversion, a typical requirement is that the Jacobian of the fields involved does not vanish. A number of authors have considered this problem in the past two decades, and a variety of methods have been developed.
Existing techniques require Hölder continuity of the parameters to be reconstructed. In practical applications, the medium may contain embedded elements with distinct physical properties, leading to discontinuous coefficients. In this article we explain how a Jacobian constraint can be imposed in the piecewise-regular case, when the physical model is a divergence-form second-order linear elliptic boundary value problem.

1. Introduction

Parameter reconstruction problems for elliptic boundary value problems are indirect reconstruction problems with, at best, logarithmic stability [Ale88, Man01]. While such measurements are desirable, as they are non-intrusive and typically require low-cost apparatus, such weak stability implies that only low-resolution reconstructions can be achieved in practice [Sch15, WA11, WS12]. The Calderón problem for electrical impedance tomography (EIT) [Cal80, Uhl09, Uhl14], inverse scattering problems [CK98] and optical tomography [Arr99] are the main examples of such problems. The stability of such methods improves dramatically when, instead of making absolute measurements ex nihilo, they are used to estimate perturbations of a known medium [AK04, CV03]. On the other hand, fast wave imaging modalities, such as ultrasound tomography or MRI, preserve singularities and achieve excellent spatial accuracy, at the cost of a loss of quantitative information with respect to the amplitude of the parameters involved.

Over the past two decades, coupled-physics, or multi-wave, or hybrid inverse problems (the final commonly accepted name is yet to be determined) have emerged. These imaging modalities aim to benefit from the advantages of both approaches: one for accurate contrast estimation, and the other for high resolution [AC18, AGK+17, Bal13, Kuc12]. The most developed hybrid modality is photo-acoustic tomography (PAT) [DDBR19, KK11, RRN09, WA11], in which light and ultrasound are combined.
Many other modalities have been considered, all combining a diffusive process with a much less diffusive one [AKKR18, ABC+08, ACdG+11, HMY+04, KK11, LJC00, SKL+12, SW11, WA11, ZW04].

The parameter reconstruction method in all these problems starts with a data collection step, where some internal data is reconstructed, involving both the parameter of interest and the solution of the PDE involving this parameter. In PAT, the internal data is µ(x)u(x), where µ is the optical absorption and u is the light intensity. In Current Density Impedance Imaging (CDII), the internal data is |γ(x)∇u(x)|, where γ is the conductivity and u is the electric field. The second step involves extracting the parameter from this data (µ(x) in PAT, γ in CDII). The mathematical problem considered in this article is related to this second step.

2010 Mathematics Subject Classification. 35J25, 35B38, 35R30.
Key words and phrases. Hybrid inverse problems, coupled-physics imaging, photo-acoustic tomography, non-zero constraints, Runge approximation, elliptic equations, Whitney reduction, unique continuation.
This study contributes to the IdEx Université de Paris ANR-18-IDEX-0001.

Example (A Jacobian constraint example). Consider the problem of reconstructing γ, a scalar function, in

−div(γDu) = 0 in Ω,

from the knowledge of the potential u in Ω, as in [Ale86]. It appears in a variety of contexts, such as hydrology [NY79], CDII [BGM14, NTT11, SJAH91, WLM94] and Acousto-Electric Tomography [AKKR18, ABC+08, CFdGK09]. If γ is regular, we have

(1.1) D(ln γ) · Du = −∆u in Ω.

Suppose given d measurements u_1, . . . , u_d. By (1.1) we obtain

(D(ln γ))^T (Du_1, · · · , Du_d) = −(∆u_1, · · · , ∆u_d) in Ω.

If det(Du_1, · · · , Du_d) > 0 holds true, then ∇(ln γ), and in turn γ up to a multiplicative constant, are explicitly readable from the data by inverting the matrix (Du_1, · · · , Du_d).
More generally, given N ≥ d measurements, the least-squares optimisation problem associated to the (possibly overdetermined) system of equations

(D(ln γ))^T (Du_1, · · · , Du_N) = −(∆u_1, · · · , ∆u_N) in Ω

has a unique minimiser when det(Du_{i_1}, · · · , Du_{i_d}) > 0 for some (i_1, . . . , i_d) ∈ {1, . . . , N}^d.

Using unique continuation methods, it is possible to address the parameter reconstruction problem without imposing Jacobian constraints [Ale14, ADCFV17, BCT22, Cho21, CT19]. On the other hand, when non-vanishing constraints are satisfied, the stability estimates are optimal (of Lipschitz type) and often lead to explicit reconstruction formulae [AC18, Bal13].

The focus of this paper is non-vanishing Jacobian constraints. In two dimensions, for the conductivity equation, a generalisation of the Radó–Kneser–Choquet theorem [Ale86, AN01, AN15, BMN01] shows that imposing a non-vanishing Jacobian constraint globally, and independently of the conductivity, is possible: in practice, it suffices to verify that the Jacobian does not vanish when the conductivity is equal to one everywhere. Such an approach cannot be extended to three dimensions, or to more general elliptic problems [Cap15, Woo91]. Suitable solutions can be constructed using complex geometrical optics solutions (CGOS) [Bal13, BBMT13, BR11, BU10, BU13], but this construction depends on the unknown coefficients, which must be smooth and isotropic. Another approach [AC18, BU13] is based on the Runge approximation [Lax56, Mal56]. It is valid for all PDEs for which a Unique Continuation Property holds, it allows for anisotropic coefficients, and the smoothness assumptions are precisely those for the Unique Continuation Property of the underlying equation, namely Lipschitz regularity or Hölder continuity depending on the equation.
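The pointwise least-squares step described in the example above is a small linear solve. The following sketch illustrates it on synthetic data at a single point; all numerical values are illustrative, and `recover_grad_log_gamma` is our own helper name, not from the paper:

```python
import numpy as np

def recover_grad_log_gamma(gradients, laplacians):
    """Solve (D ln gamma)^T (Du_1, ..., Du_N) = -(Delta u_1, ..., Delta u_N)
    in the least-squares sense at a single point.

    gradients  : (N, d) array, row k holds Du_k(x)
    laplacians : (N,)   array, entry k holds Delta u_k(x)
    """
    g, *_ = np.linalg.lstsq(gradients, -laplacians, rcond=None)
    return g  # approximation of D(ln gamma)(x)

# Synthetic check: pick a "true" D ln gamma and full-rank gradients,
# then generate the Laplacians so that (1.1) holds exactly.
rng = np.random.default_rng(0)
d, N = 3, 5
g_true = np.array([0.5, -1.0, 2.0])
Du = rng.standard_normal((N, d))   # N >= d measurements
Lu = -Du @ g_true                  # consistent with (1.1)
g_rec = recover_grad_log_gamma(Du, Lu)
print(np.allclose(g_rec, g_true))  # True
```

Since the synthetic system is consistent and the gradient matrix has full column rank, the least-squares minimiser coincides with the true gradient of ln γ.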
By combining this approach with the Whitney projection method, it is proved in [AC22] that the set of suitable solutions is open and dense, with explicit estimates on the number of solutions needed. A very related result, using a slightly different Whitney projection argument, was proved independently around the same time [CLR20]. Very recently, another approach was proposed, which showed that choosing random boundary values is possible [Alb22].

All these methods rely on some regularity of the coefficients. In practical cases, it is desirable to consider piecewise-regular coefficients, each region corresponding to a different stratum in geology, or a different organ in medical imaging.

In this work, we show how the approach introduced in [AC22] can be extended to the case of piecewise-regular coefficients. We use existing unique continuation results within the regular parts of the domain [Bro62, Lax56, Mal56], and introduce adequate quantities to cross over discontinuities. These constructions may prove useful for other models where the principal part is in divergence form.

In section §2 we detail our assumptions, state the main result of this article, and explain its proof, using intermediate results proved in the subsequent sections. In [AC22], Hölder continuity is crucially used in two instances: to show existence of solutions satisfying the adequate constraints via Runge approximation, and to use the Whitney projection method, which is based on Sard's lemma, which itself uses Hölder continuity. As a result, our developments come in two parts. In section §3 we establish the existence of a finite number of solutions such that the non-vanishing Jacobian constraint is satisfied in the whole domain: this requires adapting existing unique continuation results to cross smooth interfaces.
In section §4 we use the continuity of fluxes across interfaces, resulting from the divergence structure of the principal part, via appropriate charting, to deduce non-vanishing properties of gradients up to the internal subregion boundaries.

2. Model, Assumptions and Main Results

2.1. Problem definition. The ambient space is R^d, with d ≥ 2.

Assumption 1. Assume that Ω is an open, bounded and connected domain in R^d with a C² boundary. Assume that Ω contains N ≥ 1 open connected disjoint sets Ω_1, . . . , Ω_N with C² boundaries such that 0 < d(∪_{ℓ=1}^N Ω_ℓ, R^d \ Ω). Assume furthermore that for any i ∈ {1, . . . , N}, Ω_i has a C²(R^{d−1}) boundary, and each connected component of its boundary is in common with at most one other Ω_j, j ≠ i.

We write Ω_{N+1} = Ω \ (∪_{i=1}^N Ω_i), and denote Γ_{ij} = ∂Ω_i ∩ ∂Ω_j when this set is non-empty. Additionally, assume that each Γ_{ij} is sphere-like, that is, there exist an open neighbourhood U_{ij} of Γ_{ij}, an open neighbourhood V_{ij} of S^{d−1}, and a C² diffeomorphism ψ_{ij} : U_{ij} → V_{ij} such that ψ_{ij}(Γ_{ij}) = S^{d−1}. Moreover, there exists d_0 > 0 such that

(2.1) ∀(i, j) ∈ {1, . . . , N+1}², i ≠ j, if ∂Ω_i \ Γ_{ij} ≠ ∅ then d(Γ_{ij}, ∂Ω_i \ Γ_{ij}) > d_0.

An example of such a configuration is given in figure 3.1. Following the usual notation, given a set U we write 1_U : x ↦ 1 if x ∈ U, 0 otherwise.

Assumption 2. Given α ∈ (0, 1], for each i ∈ {1, · · · , N+1} let A_i ∈ C^{0,α}(R^d; M^s_d(R)) be a symmetric-matrix-valued function which is uniformly elliptic, that is, there exists λ > 0 such that for all x ∈ Ω and all ζ ∈ R^d,

(2.2) λ|ζ|² < A_i(x)ζ · ζ.

For each i ∈ {1, · · · , N+1}, let b_i ∈ C^{0,α}(R^d; R^d), c_i ∈ C^{0,α}(R^d; R^d) and q_i ∈ C^{0,α}(R^d; R) be such that

max( ∥A_i∥_{C^{0,α}(R^d;R^{d×d})}, ∥b_i∥_{C^{0,α}(R^d;R^d)}, ∥c_i∥_{C^{0,α}(R^d;R^d)}, ∥q_i∥_{C^{0,α}(R^d;R)} ) ≤ λ^{−1},

where for n ≥ 1,

∥f∥_{C^{0,α}(R^d;R^n)} = sup_{R^d} |f| + sup_{x≠y∈R^d, 0≠ζ∈R^n} |f(x)·ζ − f(y)·ζ| / (|x−y|^α |ζ|).
Finally, when d ≥ 3, we assume additionally that A_i ∈ C^{0,1}(R^d; M^s_d(R)).¹

We write, for all x ∈ Ω \ ∪_{i,j}Γ_{ij},

(2.3) A = Σ_{i=1}^{N+1} A_i 1_{Ω_i}, b = Σ_{i=1}^{N+1} b_i 1_{Ω_i}, c = Σ_{i=1}^{N+1} c_i 1_{Ω_i}, and q = Σ_{i=1}^{N+1} q_i 1_{Ω_i}.

We consider a second order elliptic operator of the form L : u ↦ −div(ADu + bu) + c · Du + qu, and the PDE under consideration is

(2.4) Lu = 0 in Ω.

¹ So that the Unique Continuation Property holds in each subdomain. This assumption can be relaxed when (A^{kℓ}_i(x))_{1≤k,ℓ≤d} = (a_i(x)δ_{kℓ})_{1≤k,ℓ≤d} for all x.

Thanks to assumption 2, the weak solutions of equation (2.4) enjoy additional regularity within each subdomain Ω_i, i = 1, · · · , N+1. Lemma 3 follows from classical regularity results, see e.g. [Gia93, Theorems 5.19 and 5.20] for a modern exposition.

Lemma 3. Let u ∈ H¹(Ω) be a weak solution of equation (2.4), such that there exists g ∈ C^{1,α}(Ω̄) with u − g ∈ H¹_0(Ω), and for any v ∈ H¹_0(Ω) there holds

∫_Ω ADu · Dv dx + ∫_Ω u b · Dv dx + ∫_Ω v c · Du dx + ∫_Ω quv dx = 0.

Then

(2.5) u ∈ H¹(Ω) ∩ (∪_{i=1}^{N+1} C^{1,α}(Ω_i)) =: H(Ω),

and

Σ_{i=1}^{N+1} ( ∥Du∥_{C^{0,α}(Ω_i)} + ∥u∥_{C^{0,α}(Ω_i)} ) ≤ C ( ∥u∥_{L²(Ω)} + ∥g∥_{C^{1,α}(Ω)} ),

where the constant C depends only on λ given in assumption 2 and on the Ω_i, i ∈ {1, · · · , N+1}.

We are now in a position to define the quantity of interest in this paper.

Definition 4 (Non-vanishing Jacobian solutions). Given P ≥ d+1, we call {u^{x_0}_i}_{i=1}^P ∈ H(Ω)^P (a group of) non-vanishing Jacobian solutions of equation (2.4) at x_0 ∈ Ω \ ∪_{i,j}Γ_{ij} if
(1) for i = 1, . . . , P there holds L u^{x_0}_i = 0 in Ω,
(2) the solutions {u^{x_0}_i}_{i=1}^P ∈ H(Ω)^P satisfy rank(J(u^{x_0}_1, · · · , u^{x_0}_P)(x_0)) = d+1, where

J(u_1, . . . , u_P)(x) := \begin{pmatrix} Du_1 & u_1 \\ \vdots & \vdots \\ Du_P & u_P \end{pmatrix}(x) = \begin{pmatrix} ∂_1u_1 & \cdots & ∂_du_1 & u_1 \\ \vdots & & \vdots & \vdots \\ ∂_1u_P & \cdots & ∂_du_P & u_P \end{pmatrix}(x).

Remark 5.
Thanks to lemma 3, pointwise values of J(u^{x_0}_1, · · · , u^{x_0}_P) are well defined at any x ∈ Ω \ ∪_{i,j}Γ_{ij}. The use of the word 'Jacobian' for the quantity J may seem abusive. Indeed, one would expect a Jacobian to be

\begin{pmatrix} Dv_1 \\ \vdots \\ Dv_d \end{pmatrix} = \begin{pmatrix} ∂_1v_1 & \cdots & ∂_dv_1 \\ \vdots & & \vdots \\ ∂_1v_d & \cdots & ∂_dv_d \end{pmatrix},

for some functions v_1, · · · , v_d. It turns out that the slightly generalised Jacobian we consider is a natural quantity for this problem, as it takes into account the behaviour of solutions across interfaces. On the other hand, from a family of non-vanishing Jacobian solutions one can extract a subfamily (u^{x_0}_{i_1}, · · · , u^{x_0}_{i_d}) such that det(Du^{x_0}_{i_1}, · · · , Du^{x_0}_{i_d})(x_0) ≠ 0, so it encompasses non-vanishing Jacobian constraints for the traditional definition of a Jacobian.

Following the strategy introduced in [AC22], we define the admissible set for an integer P

A(P) := {(u_1, u_2, · · · , u_P) ∈ H(Ω)^P : ∀x ∈ Ω \ ∪_{i,j}Γ_{ij}, (u_1, u_2, · · · , u_P) are non-vanishing Jacobian solutions at x}.

For a geometrical reason that will be discussed later, we introduce the notation

d⋆ = \begin{cases} d & \text{when } d = 2, 4, 8, \\ d+1 & \text{otherwise, when } d ≥ 3. \end{cases}

2.2. Main result. The main result of this article is the following.

Theorem 6. Under assumption 1 and assumption 2, when P ≥ ⌈(d + d⋆ + 1)/α⌉, A(P) is an open and dense subset of H(Ω)^P, where H(Ω) is defined in equation (2.5).

Remark. This theorem is an extension of [AC22, Theorem 2.3] to the piecewise-regular context. In terms of the result itself, the number P obtained in [AC22] is ⌈2d/α⌉; thus our result requires a slightly larger number than the regular case. However, the number of subdomains where the coefficients are regular does not play a role.

A careful reader comparing [AC22, Theorem 2.3] and theorem 6 might notice that our result applies to the whole domain, instead of a compact subset. Another simplification is that we need
Another simplification is that we need +not assume that the Dirichlet (or Neuman or Robin) boundary value problem associated to (2.4) +is well posed for our result to hold. +The proof is done in several steps. +Theorem 7. For any σ > 0 there exists ε > 0 such that for any x ∈ Ω \ ∪i,jΓij, there exists +d + 1 solutions denoted as ux +1, ux +2, · · · , ux +d+1 such that ux +i ∈ H1 (Ω) and Lux +i = 0 in Ω for i ∈ +{1, 2, · · · , d + 1}, and there holds +(2.6) +det J +� +ux +1, ux +2, · · · , ux +d+1 +� +(y) = det + + +∂1ux +1 +· · · +∂dux +1 +ux +1 +... +... +... +... +∂1ux +d+1 +· · · +∂dux +d+1 +ux +d+1 + + (y) > σ +for any y ∈ B (x, ε) ∩ Ωj, j ∈ {1, . . . , N + 1}. +This result is proved in section §3. It does not follow directly from classical unique continuation +arguments, because of the discontinuous nature of the coefficients of equation (2.4). +Choose σ = 1, and let ε be the corresponding ball radius. We may extract a finite cover of Ω +from ∪x∈Ω\∪i,jΓijB (x, ε), of cardinality smaller than, say, +� +ε−1diam (Ω) +�d + 1. As a result, +(2.7) +A +���diam (Ω) +ε +�d� ++ 1 +� +̸= ∅. +To reduce the cardinality of the required group of non-vanishing Jacobian solutions, and to prove +the density property we announced, we use a Whitney reduction lemma. +This strategy was used in [AC22], based on a method introduced in [GW75], and used the +Hölder continuity of the Jacobian map J. In our setting, J may be discontinuous across interfaces +Γij. +On the other hand, because of the divergence form of the principal part of the elliptic operator +L, a mixed-type (for lack of a better word) Jacobian map of the form +(A∇u · h1 + b · h1u, ∇u · h2, · · · , ∇u · hd, u) , +with appropriately chosen (h1, · · · , hd) ∈ C0,1 � +Ω; Rd×d� +is continuous. +Proposition 8. There exists a family of vector-valued functions F = f1, · · · , fd⋆ ∈ C0,1 � +Ω; Rd�d⋆ +, +such that +(1) For every x ∈ Ω, there holds rank (f1, · · · , fd⋆) (x) = d. 
(2) On each Γ_{ij}, |f_1| = 1, f_1 is normal to Γ_{ij}, and f_1 · f_j = 0 for any j ≥ 2.
(3) For any u ∈ H(Ω) weak solution of equation (2.4), the map

(2.8) J_f(u, F) := ((ADu + bu) · f_1, Du · f_2, · · · , Du · f_{d⋆}, u)

satisfies J_f(u, F) ∈ C^{0,α}(Ω̄; R^{d⋆+1}).

This proposition is proved in section §4.

Remark. The vector f_1 can be thought of as the extension of the normal vector, and f_2, · · · , f_{d⋆} as the tangent vectors, on each boundary Γ_{ij}. Indeed, since A and b are only piecewise regular, only the normal flux is continuous (and, in turn, Hölder continuous) across interfaces between any Ω_i and Ω_j. This forces f_2, · · · , f_{d⋆} to be tangent to the interface. A topological difficulty arises in all dimensions except 2, 4 and 8, which requires the introduction of an extra element to obtain a full-rank family of Lipschitz continuous tangent vectors. This classical result [BM58, Ker58] is discussed further in section §B.

To untangle the dependence of J_f on u and F, we reformulate J_f as follows.

Proposition 9. We note P_{d,d+1} ∈ R^{d×(d+1)} the projection from R^{d+1} to R^d given by (P_{d,d+1})_{ij} = δ_{ij}. We note E_{d+1,d} the extension from R^d to R^{d+1} given by (E_{d+1,d})_{ij} = δ_{ij}. Set

T : (Ω \ ∪_{ij}Γ_{ij}) × R^{d+1} → L(R^{(d+1)×(d⋆+1)}),
(x, ζ_1, · · · , ζ_{d⋆+1}) ↦ \begin{pmatrix} A^T(x)P_{d,d+1}ζ_1 & P_{d,d+1}ζ_2 & \cdots & P_{d,d+1}ζ_{d⋆} & P_{d,d+1}ζ_{d⋆+1} \\ b(x) · P_{d,d+1}ζ_1 & 0 & \cdots & 0 & 1 \end{pmatrix}.

For any x ∈ Ω \ ∪_{ij}Γ_{ij} and for any (ξ_1, · · · , ξ_{d⋆}) ∈ (R^d)^{d⋆} there holds

(2.9) rank(T(x, E_{d+1,d}ξ_1, · · · , E_{d+1,d}ξ_{d⋆}, e_{d+1})) = rank(ξ_1, · · · , ξ_{d⋆}) + 1.

Furthermore, we have

J_f(u, F) = (∂_1u, · · · , ∂_du, u) T(x, E_{d+1,d}f_1, · · · , E_{d+1,d}f_{d⋆}, e_{d+1}),

where J_f is given by equation (2.8).

Proof. The last column of T(x, E_{d+1,d}ξ_1, · · · , E_{d+1,d}ξ_{d⋆}, e_{d+1}) is e_{d+1} ≠ 0. Together with the fact
Together with the fact +that Pd,d+1Ed+1,d = Id, the identity matrix in Rd, the first d⋆ columns are +� +AT (x)ξ1 +ξ2 +· · · +ξd⋆ +b(x)ξ1 +0 +· · · +0 +� +Thanks to the uniform ellipticity of A, AT ξ1 · ξ1 > λ |ξ1|2 and equation (2.9) follows. The identity +involving Jf is straightforward. +□ +The Whitney reduction argument is as follows. +Lemma 10. Given P ∈ N large enough so that A (P) ̸= ∅, define +F : Ω \ ∪i,jΓij × Rd⋆+1 +→ +RP +(x, ζ) +→ +Fxζ +(2.10) +where +Fxζ := + + +(∂1u1, · · · , ∂du1, u1) +... +(∂1uP , · · · , ∂duP , uP ) + + T (x, Ed+1,df1, · · · , Ed+1,dfd⋆, ed+1) ζ, +with {u1, · · · , uP } ∈ A (P) . Then Fx has rank d + 1. For P > d+d⋆+1 +α +, and a ∈ RP −1, let Pa be +the map from RP to RP −1 defined by +Pa(y) = (y1 − a1yP , · · · , yP −1 − aP −1yP ) +for y = (y1, y2, · · · , yP ) ∈ RP . +Let G = +� +a ∈ RP −1|Pa ◦ Fx has rank d + 1 +� +, then |RP −1 − +G|Lebesgue = 0. +The proof of this lemma is given in section A.3. We then translate this reduction result for Jf +into its counterpart for our original target map J. + +JACOBIAN CONSTRAINTS WITH DISCONTINUOUS COEFFICIENTS +7 +Lemma 11. Given any P > d+d⋆+1 +α +, and any {u1, · · · , uP} ∈ A (P), let G be the set of a = +(a1, · · · , aP −1) ∈ RP −1 such that for all x ∈ Ω \ ∪ijΓij there holds +rank J (u1 − a1uP , · · · , uP −1 − aP −1uP ) (x) = d + 1. +Then +��RP −1 \ G +�� +lebesgue = 0. +Proof. Given F = {f1(x), · · · , fd⋆(x)} ∈ +� +C0,1 � +Ω; Rd��d⋆ +as defined in proposition 8, for x ∈ +Ω \ ∪ijΓij, let Fx : Rd⋆+1 → RP as given in equation (2.10). Thanks to lemma 10, we have rank +Fx = d + 1 and for a.e a ∈ RP −1, Pa ◦ Fx has rank d + 1 which means +rank + + +Jf (u1 − a1uP , F) +... +Jf (uP −1 − aP −1uP , F) + + (x) = d + 1. +Denote J =J (u1 − a1uP , · · · , uP −1 − aP −1uP) and T = T (x, Ed+1,df1, · · · , Ed+1,dfd⋆, ed+1) so +that + + +Jf (u1 − a1uP , F) +... +Jf (uP −1 − aP −1uP , F) + + = J T . 
Then rank(J T) = d+1, and since rank(J T) ≤ min(rank(J), rank(T)), we conclude that d+1 ≥ rank(J) ≥ d+1, which proves that rank(J) = d+1. □

With the above lemma, we have now returned to a familiar setting, where no further complications due to the discontinuous nature of the coefficients arise. The rest of the proof of theorem 6 now follows an argument similar to the one found in [AC22, Theorem 2.3], together with a variant of the argument above to prove that the set A(P) is open, which we include in section §C.

2.3. Application to an example. We revisit example 1, namely the reconstruction of the conductivity from the knowledge of the solution, to illustrate how our result naturally extends existing results derived for uniformly regular parameters. In addition to assumption 1 and assumption 2, suppose that b = c = q = 0, A = γI_d, where γ is a scalar-valued function, and α = 1.

Proposition. Given P > 0 such that A(P) ≠ ∅, and {u_1, · · · , u_P} ∈ A(P). For each ℓ ∈ {1, · · · , P}, u_ℓ ∈ BV(Ω), and its singular part is a jump set. The union over ℓ of these jump sets is ∪_{i,j}Γ_{ij}.

Given x ∈ Γ_{ij}, let n(x) be the normal pointing from Ω_i to Ω_j, that is, x + tn(x) ∈ Ω_i for t < 0 and x + tn(x) ∈ Ω_j for t > 0, provided |t| is small enough. Let u_p be such that lim_{t→0+} |Du_p(x + tn)| = max_{k∈{1,··· ,P}} lim_{t→0+} |Du_k(x + tn)|. Then

lim_{t→0+} ( ln|Du_p(x + tn(x)) · n(x)| − ln|Du_p(x − tn(x)) · n(x)| ) = −[ln γ(x)]_{ij},

where

[ln γ(x)]_{ij} = lim_{h→x, h∈Ω_j} ln γ(h) − lim_{h→x, h∈Ω_i} ln γ(h).

The absolutely continuous part of D ln γ with respect to the Lebesgue measure is determined by

\begin{pmatrix} Du_1 \\ \vdots \\ Du_P \end{pmatrix} D ln γ = −\begin{pmatrix} ∆u_1 \\ \vdots \\ ∆u_P \end{pmatrix} on Ω_k, k ∈ {1, · · · , N+1}.

Remark. In particular, γ is uniquely determined up to a multiplicative constant.

Proof. Thanks to lemma 3 and proposition 8, there holds

J_f(u_ℓ, F) = (γDu_ℓ · f_1, Du_ℓ · f_2, · · · , Du_ℓ · f_{d⋆}, u_ℓ) ∈ C^{0,1}(Ω̄).
Because rank F = d, the discontinuities of Du_ℓ are included in the Γ_{ij}. For any given ℓ, the jump set may not correspond exactly to the entire ∪_{i,j}Γ_{ij}, since Du_ℓ · f_1 may possibly vanish on these interfaces; however,

rank [J_f(u_1, F), · · · , J_f(u_P, F)](x) = d + 1 for all x ∈ Ω,

thus in particular rank [(γDu_1 · f_1, · · · , γDu_P · f_1)](x) = 1 on ∪_{i,j}Γ_{ij}, and the set is indeed the whole ∪_{i,j}Γ_{ij}. Equipped with all interfaces Γ_{ij}, and a set of associated normal vectors, we may recover the jumps between the different regions.

Thanks to proposition 8, γDu_ℓ · f_1 ∈ C^{0,1}(Ω̄), and f_1(x) = n(x) on each Γ_{ij}. Since lim_{t→0+} |Du_p(x + tn)| = max_{k∈{1,··· ,P}} lim_{t→0+} |Du_k(x + tn)|, and not all such limits can be zero since (u_1, · · · , u_P) ∈ A(P), lim_{t→0} ln|(γDu_p)(x + tn) · n| ∈ R. In particular,

lim_{t→0+} ( ln|(γDu_p)(x + tn) · n| − ln|(γDu_p)(x − tn) · n| ) = 0,

and therefore

lim_{t→0+} ( ln|Du_p(x + tn) · n| − ln|Du_p(x − tn) · n| ) = −[ln γ(x)]_{ij}.

The final identity is obtained exactly as in the regular case. □

3. Proof of theorem 7

We construct a group of solutions which satisfies the Jacobian constraint locally within one subdomain Ω_i, and extend it one subdomain at a time. To do this rigorously, we introduce a construction map, and an associated index map, defining the order in which the extension is performed. For any permutation i : {1, · · · , N+1} → {1, · · · , N+1}, we denote Ω_{I_k} = Ω_{i(1)} ∪ · · · ∪ Ω_{i(k)} for k ∈ {1, · · · , N+1}. We have the following definition:

Definition 12. We say a permutation i : {1, · · · , N+1} → {1, · · · , N+1} is a construction map if the following holds: for any j ∈ {2, · · · , N+1}, there exists a unique k(j) ∈ {1, · · · , j−1} such that

∂Ω_{i(j)} ∩ ∂Ω_{I_{j−1}} = Γ_{i(j)i(k(j))}.
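The defining property of a construction map is purely combinatorial: each newly added subdomain must share an interface with exactly one subdomain added before it. It can be checked mechanically on the adjacency structure of the subdomains. In the sketch below, the interface list is an assumption on our part, chosen so that the two construction maps of example 13 are admissible; it is not read off figure 3.1:

```python
def is_construction_map(order, interfaces):
    """Check the condition of Definition 12: for each j >= 2, the subdomain
    order[j] must touch exactly one of the subdomains order[0..j-1].

    order      : tuple (i(1), ..., i(N+1)), a permutation of subdomain labels
    interfaces : set of frozensets {i, j}, the non-empty Gamma_ij
    """
    for j in range(1, len(order)):
        touching = [k for k in range(j)
                    if frozenset({order[j], order[k]}) in interfaces]
        if len(touching) != 1:   # zero or several neighbours: not admissible
            return False
    return True

# Hypothetical adjacency: Gamma_12, Gamma_23, Gamma_15, Gamma_45.
interfaces = {frozenset(p) for p in [(1, 2), (2, 3), (1, 5), (4, 5)]}
print(is_construction_map((2, 3, 1, 5, 4), interfaces))  # True  (i_1 of example 13)
print(is_construction_map((2, 1, 5, 4, 3), interfaces))  # True  (i_2 of example 13)
print(is_construction_map((1, 3, 2, 5, 4), interfaces))  # False (3 does not touch 1)
```

The uniqueness requirement (exactly one touching predecessor) is what produces the well-defined index k(j) used in the index map below.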
With i a construction map comes j^i = (j^i_1, · · · , j^i_{N+1}), the index map of i, defined as follows:
(1) for every s ∈ {1, · · · , N + 1}, j^i_s ∈ {1, · · · , N + 1}^{N+1};
(2) the starting map j^i_1 satisfies j^i_1 = (i(1), · · · , i(1));
(3) for any s ∈ {2, · · · , N + 1} we have (j^i_s)_{i(ℓ)} = (j^i_{s−1})_{i(ℓ)} if ℓ ≤ s − 1, and (j^i_s)_{i(ℓ)} = i(ℓ) if ℓ = s; for ℓ ≥ s + 1, (j^i_s)_{i(ℓ)} is defined inductively by (j^i_s)_{i(ℓ)} = (j^i_s)_{i(k(ℓ))}.

Thanks to assumption 1, for any i ∈ {1, · · · , N + 1}, we can always find a construction map i with i(1) = i. A simple example is:

Example 13. Let Ω = Ω_1 ∪ Ω_2 ∪ Ω_3 ∪ Ω_4 ∪ Ω_5 as in figure 3.1. Then i_1 : {1, 2, 3, 4, 5} → {2, 3, 1, 5, 4} and i_2 : {1, 2, 3, 4, 5} → {2, 1, 5, 4, 3} are two different construction maps with i_1(1) = i_2(1) = 2. We have

j^{i_1} = [(2, 2, 2, 2, 2); (2, 2, 3, 2, 2); (1, 2, 3, 1, 1); (1, 2, 3, 5, 5); (1, 2, 3, 4, 5)]

and

j^{i_2} = [(2, 2, 2, 2, 2); (1, 2, 2, 1, 1); (1, 2, 2, 5, 5); (1, 2, 2, 4, 5); (1, 2, 3, 4, 5)].

Remark. Note that for any construction map i, there holds j^i_{N+1} = (1, · · · , N + 1).

JACOBIAN CONSTRAINTS WITH DISCONTINUOUS COEFFICIENTS

Figure 3.1. A 4-inclusion configuration (regions labelled 1 to 5).

In the sequel, it would be convenient to assume that the Dirichlet boundary value problem associated with L is well posed in Ω, as it would allow us to control the norm of solutions by their boundary traces. In fact, well-posedness for a large family of sub-problems will be used. We denote by L[i_1, i_2, · · · , i_{N+1}], for i_1, · · · , i_{N+1} ∈ {1, · · · , N + 1}, the second order elliptic operator whose coefficients in Ω_j are A_{i_j}, b_{i_j}, c_{i_j}, q_{i_j}. We shall use the following lemma.

Lemma 14.
There exists some ϑ > 0 such that for any κ ∈ (0, ϑ), all Dirichlet boundary value problems associated with L[i_1, · · · , i_{N+1}] + κ, where i_1, · · · , i_{N+1} ∈ {1, · · · , N + 1}, are well-posed in Ω.

The proof of this lemma is given in section A.1. For any fixed κ ∈ (0, ϑ), we first prove theorem 7 for L + κ. To simplify notation, we write L for L + κ. Thanks to lemma 14, the Dirichlet boundary value problem associated with L[i_1, · · · , i_{N+1}] is well-posed for any i_1, · · · , i_{N+1} ∈ {1, · · · , N + 1}. In the last step, we shall revert to the original operator, now L − κ, to prove theorem 7 using the smallness of κ and the regularity of the coefficients.

The proof of theorem 7 relies on a series of lemmas. To start the construction, we exhibit functions satisfying the requirement (2.6), which satisfy L u^x_i = 0 in a neighbourhood of x.

Lemma 15. Given j ∈ {1, · · · , N + 1}, for any σ ∈ (0, 1), there exists ε ∈ (0, 1), depending only on λ, σ, d and the κ given by lemma 14, such that for any point x ∈ Ω \ ∪_{i,j} Γ_{ij}, there exist u^x_1, u^x_2, · · · , u^x_{d+1} ∈ (H^1(Ω))^{d+1} such that for i ∈ {1, 2, · · · , d + 1}, L[j, · · · , j] u^x_i = 0 in Ω. Moreover there holds

det J(u^x_1, u^x_2, · · · , u^x_{d+1})(y) > σ

for any y ∈ B(x, ε) ∩ Ω.

Proof. Fix j = 1. Consider x = 0 ∈ Ω, and B_x = B(0, 2 diam Ω) a ball centred at x containing Ω. In the sequel, C represents any constant depending only on d, the λ given in assumption 2, and the κ given by lemma 14. Note that the coefficients of L[1, · · · , 1], namely A_1, b_1, c_1, q_1 + κ, are Hölder continuous on B_x. Consider the constant coefficient partial differential operator

(3.1) L_0 : v → −div(A_1(0) Dv + b_1(0) v) + c_1(0) · Dv + (q_1(0) + κ) v.

For i = 1, · · · , d, let u_i = f(x_i), where f is the solution of the constant coefficient ODE

−(A_1)_{ii}(0) f″(t) + (c^i_1(0) − b^i_1(0)) f′(t) + (q_1(0) + κ) f(t) = 0 for all t ∈ R, f′(0) = 1, f(0) = 0.
Let u_{d+1} = f(x_1), where f is now the solution of the following second-order constant coefficient ODE initial value problem:

−(A_1)_{11}(0) f″(t) + (c^1_1(0) − b^1_1(0)) f′(t) + (q_1(0) + κ) f(t) = 0 for all t ∈ R, f′(0) = 0, f(0) = 1.

We observe that, for all i ∈ {1, · · · , d + 1}, L_0 u_i = 0 in Ω, and det J(u_1, · · · , u_{d+1})(0) = 1. We now turn to solutions of the boundary value problem with variable coefficients. Set V_ε = B(0, 2ε) ⊂ B_x for some ε ∈ (0, min(1/2, diam Ω / 2)) to be chosen later. We shall construct u^x_1, · · · , u^x_{d+1} in H^1_loc(B_x), the required construction being obtained by taking the restriction to Ω. Consider the d + 1 Dirichlet problems

L[1, · · · , 1] v_j = 0 in V_ε, v_j = u_j on ∂V_ε, j = 1, · · · , d + 1.

Note that this problem is well posed for ε small enough. Thanks to lemma 3, v_1, v_2, · · · , v_{d+1} are well defined and in C^{1,α}(V_ε). Set, for i = 1, · · · , d + 1, δ^1_i := −(A_1(x) − A_1(0)) Du_i − (b_1(x) − b_1(0)) u_i, and δ^2_i := (c_1(x) − c_1(0)) · Du_i + (q_1(x) − q_1(0)) u_i. Then, for each i,

L[1, · · · , 1](u_i − v_i) = div(δ^1_i) + δ^2_i,

and

∥Du_i − Dv_i∥_{C^{0,α}(V_ε)} ≤ C (∥δ^1_i∥_{C^{0,α}(V_ε)} + ∥δ^2_i∥_{C^{0,α}(V_ε)}).

In particular,

(3.2) ∥Du_i − Dv_i∥_{C^{0,α/2}(V_ε)} ≤ C (∥δ^1_i∥_{C^{0,α}(V_ε)} + ∥δ^2_i∥_{C^{0,α}(V_ε)}) diam(V_ε)^{α/2} ≤ C ε^{α/2}.

By an integration by parts, and Poincaré’s inequality (with a constant chosen to be valid for any ε ∈ (0, 1)),

∥Du_i − Dv_i∥_{L^2(V_ε)} ≤ C (∥δ^1_i∥_{L^2(V_ε)} + C_{Poincaré} ∥δ^2_i∥_{L^2(V_ε)}).

We compute, using the Hölder regularity of the parameters,

∫_{V_ε} (δ^1_i)² dx + ∫_{V_ε} (δ^2_i)² dx ≤ C ε^{d+2α}.

Inserting this estimate in the inequality above, we obtain

∥u_i − v_i∥_{H^1_0(V_ε)} ≤ C ε^{d/2+α},

and using that for any f on V_ε there holds ∥f∥_∞ ≤ ∥f∥_{C^{0,α/2}(V_ε)} diam(V_ε)^{α/2} + |V_ε|^{−1/2} ∥f∥_{L^2(V_ε)}, we conclude that

∥Du_i − Dv_i∥_{L^∞(V_ε)} ≤ C ε^α.
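The constant-coefficient construction above lends itself to a direct numerical check. The sketch below uses illustrative coefficient values only (not taken from the paper), and assumes the augmented-Jacobian convention in which the row associated with a solution u is (Du, u); it solves the two characteristic ODEs in closed form for d = 3 and confirms that det J(u_1, · · · , u_{d+1})(0) = 1.

```python
import math

# Illustrative constant coefficients (hypothetical values): the ODE reads
#   -a f'' + m f' + q f = 0,  with m = c^i(0) - b^i(0), q = q(0) + kappa > 0.
a, m, q = 2.0, 0.5, 0.3
d = 3

# Characteristic roots of -a r^2 + m r + q = 0, real and distinct since q > 0.
disc = math.sqrt(m * m + 4.0 * a * q)
r1, r2 = (m + disc) / (2.0 * a), (m - disc) / (2.0 * a)

def g(t):   # solution with g(0) = 0, g'(0) = 1  (the u_i, i <= d)
    return (math.exp(r1 * t) - math.exp(r2 * t)) / (r1 - r2)

def dg(t):
    return (r1 * math.exp(r1 * t) - r2 * math.exp(r2 * t)) / (r1 - r2)

def h(t):   # solution with h(0) = 1, h'(0) = 0  (the extra solution u_{d+1})
    return (r1 * math.exp(r2 * t) - r2 * math.exp(r1 * t)) / (r1 - r2)

def dh(t):
    return r1 * r2 * (math.exp(r2 * t) - math.exp(r1 * t)) / (r1 - r2)

# u_i(x) = g(x_i) for i <= d, u_{d+1}(x) = h(x_1); row i of J is (Du_i, u_i).
def jacobian_at_zero():
    rows = []
    for i in range(d):
        grad = [dg(0.0) if k == i else 0.0 for k in range(d)]
        rows.append(grad + [g(0.0)])
    rows.append([dh(0.0)] + [0.0] * (d - 1) + [h(0.0)])
    return rows

def det(M):  # Laplace expansion along the first row; fine for a small matrix
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

print(abs(det(jacobian_at_zero()) - 1.0) < 1e-12)  # True
```

At the origin the Jacobian reduces to the identity matrix, which is exactly the observation det J(u_1, · · · , u_{d+1})(0) = 1 used in the proof.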
Because of assumption 2, the operator L[1, · · · , 1] enjoys the Unique Continuation Property on B_x. Thus, for each i there exists u^x_i ∈ H^1(B_x) ∩ C^{1,α}_loc(B_x) such that L[1, · · · , 1] u^x_i = 0 on B_x, and ∥u^x_i − v_i∥_{L^2(V_ε)} < ε. Thanks to lemma 3, this implies ∥Du^x_i − Dv_i∥_{L^∞(B_ε)} ≤ C ε^α, where B_ε = B(0, ε), and in turn,

∥Du_i − Du^x_i∥_{L^∞(B_ε)} ≤ C ε^α.

Since det J is multi-linear,

|det J(u_1, · · · , u_{d+1}) − det J(u^x_1, · · · , u^x_{d+1})| ≤ (d + 1) (∑_{i=1}^{d+1} |Du_i| + |Du^x_i|)^d max_{1≤i≤d+1} |Du_i − Du^x_i|.

Therefore

sup_{B_ε} |det J(u_1, · · · , u_{d+1}) − det J(u^x_1, · · · , u^x_{d+1})| ≤ C ε^α.

Since det J(u_1, u_2, · · · , u_{d+1})(0) = 1, for any σ ∈ (0, 1) there exists ε, depending only on λ, d, σ and κ, such that

min_{B_ε} det J(u^x_1, · · · , u^x_{d+1}) > σ. □

The following lemma extends a solution across an interface.

Lemma 16. Let i be a construction map as defined in definition 12, and j^i the associated index map. Given k ∈ {1, . . . , N + 1}, write Γ_k = ∂Ω_{I_{k−1}} ∩ ∂Ω_{i(k)}, and L_k = L[j^i_k]. Let

W_k = int( ∪_{ℓ : (j^i_k)_ℓ = (j^i_{k−1})_ℓ} Ω_ℓ ).

In other words, W_k is the open set where the coefficients of L_k are almost everywhere the same as those of L_{k−1}. Suppose that u ∈ H^1(W_k) is a weak solution of L_k u = L_{k−1} u = 0 in W_k. For any δ > 0, there exists an open set U such that W_k ∪ Γ_k ⊂ U ⊂ Ω, and v ∈ H^1(U) such that L_k v = 0 in U and

∥u − v∥_{H^1(W_k)} < δ.

Proof. Suppose that Γ_k = Γ_{1 i(k)}, that is, the subdomain within Ω_{I_k} for which Γ_k is a connected component of its boundary is Ω_1. Write i(k) = k, and ω_1 = Ω_1 ∩ {x : d(x, Γ_k) < d_0}, where d_0 is given by equation (2.1). Thanks to assumption 1, there exists a C^2 diffeomorphism ψ_k : U_{1,k} → V_{1,k}, Γ_k ↦ ∂B_1, where U_{1,k} and V_{1,k} are neighbourhoods of Γ_k and ∂B_1. Take η > 0 small enough such that ψ_k^{−1}(∂B_{1−η}) ⊂ U_{1,k} ∩ ω_1.
Take t ∈ (1/2, 1) and set

U^t := {ψ_k^{−1}(x) : tx ∈ B_1 \ B_{1−η}} = {ψ_k^{−1}(x) : x ∈ B_{1/t} \ B_{(1−η)/t}} ⊂ U_{1,k}.

An example of such a construction is illustrated in figure 3.2. In what follows, C is any constant, which may change from line to line, depending on Ω, Ω_1, d, λ, κ and ∥ψ_k^{−1}∥_{C^2}. Write Y := U^t ∩ Ω_1, G := U^t ∩ Ω_{i(k)}. Define

u^t(x) = u(ψ_k^{−1}(t ψ_k(x))) ∈ H^1(U^t).

There exists some η_0 > 0 such that for any 0 < η < η_0 and any t ∈ (1/2, 1) there holds

(3.3) ∀u ∈ H^1_0(U^t), ⟨L_k u, u⟩_{H^{−1}(U^t),H^1_0(U^t)} ≥ (1/(3λ)) ∥u∥²_{H^1_0(U^t)}.

We establish this claim in section A.2.

Figure 3.2. For the example shown in figure 3.1, this illustrates an extension across Γ_{15}, the dashed line. The green area corresponds to U^t_{1,5}, with t = 1/10. In this example, the Dirichlet problem is L[1, 2, 3, 5, 5]u = 0.

Consider the Dirichlet boundary value problem in U^t

L_k v = 0 in U^t, v = u^t on ∂U^t,

which is well-posed thanks to (3.3). We estimate

⟨L_k(u^t − v), u^t − v⟩_{H^{−1}(U^t)×H^1(U^t)} = ⟨L_k u^t, u^t − v⟩_{H^{−1}(U^t)×H^1(U^t)} ≤ |J_0| + |J_1| + |J_2|,

where

J_0 = ∫_Y A D(u^t − u) · D(u^t − v) + (u^t − u)(b + c) · D(u^t − v) + (q + κ)(u^t − u)(u^t − v) dx,

J_1 = ∫_G A Du^t · D(u^t − v) + b u^t · D(u^t − v) + c · Du^t (u^t − v) + (q + κ) u^t (u^t − v) dx,

and

J_2 = −∫_{Γ_k} (A Du^t + b u^t) · n (u^t − v) dS.

Thanks to the Hölder regularity of u and Du (see lemma 3),

|u^t(x) − u(x)| = |u(ψ_k^{−1}(t ψ_k(x))) − u(ψ_k^{−1}(ψ_k(x)))| ≤ C |1 − t|^α ∥u∥_{C^{0,α}(ω_1)}.

Similarly,

|Du^t(x) − Du(x)| ≤ C |1 − t|^α ∥Du∥_{C^{0,α}(ω_1)},

and altogether

|J_0| ≤ C |1 − t|^α ∥u∥_{C^{1,α}(ω_1)} ∥v − u^t∥_{H^1(U^t)}.
To estimate J_1, we write

|J_1| ≤ C ∥u^t∥_{H^1(G)} ∥v − u^t∥_{H^1(U^t)},

and by interpolation,

∥u^t∥_{H^1(G)} ≤ C (∥u∥_{L^∞(Ω_1)} + ∥Du∥_{L^∞(Ω_1)}) |G|^{1/2} ≤ C (∥u∥_{L^∞(Ω_1)} + ∥Du∥_{L^∞(Ω_1)}) (1 − t)^{1/2}.

Thus altogether, writing β = min(α, 1/2),

(3.4) |J_0| + |J_1| ≤ C ∥u∥_{C^{1,α}(Ω_1)} ∥v − u^t∥_{H^1(U^t)} |1 − t|^β.

Note that

(3.5) J_2 ≤ ∥(A Du + b u) · n∥_{L^2(Γ_k)} ∥u^t − v∥_{L^2(Γ_k)} ≤ C (∥u∥_{L^∞(Ω_1)} + ∥Du∥_{L^∞(Ω_1)}) ∥u^t − v∥_{L^2(Γ_k)}.

Note that for every x ∈ Γ_k, ψ_k^{−1}((1/t) ψ_k(x)) ∈ ∂U^t. Since v = u^t on ∂U^t, we find on Γ_k

(u^t − v)(x) = (u^t − v)(ψ_k^{−1} ∘ ψ_k(x)) − (u^t − v)(ψ_k^{−1}((1/t) ψ_k(x))) = ∫_1^{1/t} D((u^t − v) ∘ ψ_k^{−1})(θx) · x dθ.

Applying Cauchy–Schwarz, we find

|(u^t − v)(x)| ≤ C |1 − t|^{1/2} (∫_1^{1/t} |D((u^t − v) ∘ ψ_k^{−1})(θx)|² dθ)^{1/2},

and integrating over Γ_k,

(3.6) ∥u^t − v∥²_{L^2(Γ_k)} ≤ C |1 − t| ∫_{Γ_k} ∫_1^{1/t} |D((u^t − v) ∘ ψ_k^{−1})(θx)|² dθ dx ≤ C |1 − t| ∥u^t − v∥²_{H^1(G)}.

In turn, combining (3.4), (3.5) and (3.6),

|⟨L_k(u^t − v), u^t − v⟩| ≤ C |1 − t|^β ∥u∥_{C^{1,α}(Ω_1)} ∥u^t − v∥_{H^1(U^t)}.

Thanks to (3.3), this implies

∥u^t − v∥_{H^1(U^t)} ≤ C |1 − t|^β ∥u∥_{C^{1,α}(Ω_1)}.

For every fixed t, consider the following system:

(3.7) L_k S = 0 in Ω \ ψ_k^{−1}(∂B_{(1−η)/t}); S = 0 on ∂Ω; [S] = u − v on ψ_k^{−1}(∂B_{(1−η)/t}); [(A DS + b S) · n] = (A Du + b u) · n − (A Dv + b v) · n on ψ_k^{−1}(∂B_{(1−η)/t}),

where [·] denotes the jump across the boundary. Thanks to lemma 14, this problem is well posed, and there exists some S ∈ H^1(Ω \ ψ_k^{−1}(∂B_{(1−η)/t})) solution of equation (3.7).
Moreover there holds

∥S∥_{H^1(Ω \ ψ_k^{−1}(∂B_{(1−η)/t}))} ≤ C (∥u − v∥_{H^{1/2}(ψ_k^{−1}(∂B_{(1−η)/t}))} + ∥(A D(u − v) + b(u − v)) · n∥_{H^{−1/2}(ψ_k^{−1}(∂B_{(1−η)/t}))}) ≤ C ∥u − v∥_{H^1(Y)}.

Using the triangle inequality, this yields

∥S∥_{H^1(Ω \ ψ_k^{−1}(∂B_{(1−η)/t}))} ≤ C (∥u^t − u∥_{H^1(Y)} + ∥u^t − v∥_{H^1(Y)}) ≤ C (1 − t)^β ∥u∥_{C^{1,α}(Ω_1)}.

Take ṽ^t = v 1_{U^t} + u 1_{(W_k \ Ω_1) ∪ ψ_k^{−1}(B_{(1−η)/t})} + S 1_{Ω \ ψ_k^{−1}(∂B_{(1−η)/t})}. By construction, we have ṽ^t ∈ H^1(W_k ∪ Γ_k ∪ U^t), and there holds ∥ṽ^t − u∥_{H^1(W_k)} ≤ C (1 − t)^β ∥u∥_{C^{1,α}(Ω_1)}. The conclusion follows choosing t close enough to 1, and U = W_k ∪ Γ_k ∪ U^t. □

The third step is to extend the solution to the whole Ω.

Lemma 17. With the notation of lemma 16, for any ε > 0, there exists a weak solution v ∈ H^1(Ω) of L_k v = 0 in Ω such that ∥v − u∥_{H^1(U)} < ε.

Proof. Note that on V_k = Ω \ W_k the coefficients of L_k are not discontinuous, and the Unique Continuation Property holds. As a result there exists a sequence of functions (u_n)_{n∈N} ∈ H^1(V_k)^N such that

L_k u_n = 0 on V_k and ∥u_n − u∥_{H^1(U∩V_k)} ≤ 1/n,

which implies that

∥u_n − u∥_{H^{1/2}(∂W_k)} ≤ ∥u_n − u∥_{H^{1/2}(∂(U∩V_k))} ≤ ∥u_n − u∥_{H^1(U∩V_k)} ≤ C/n.

Let ν be the outer normal vector of ∂W_k, and let

F_1 : H^1(W_k) → H^{−1/2}(∂W_k), u ↦ (Ã|_{W_k} Du + b̃|_{W_k} u) · ν,

and

F_2 : H^1(U \ W_k) → H^{−1/2}(∂W_k), u ↦ (Ã|_{V_k} Du + b̃|_{V_k} u) · ν.

Since u ∈ H^1(U) is a weak solution of L_k u = 0 in U, there holds F_1(u) = F_2(u) on ∂W_k. As a result,

∥F_1(u) − F_2(u_n)∥_{H^{−1/2}(∂W_k)} ≤ ∥F_2(u) − F_2(u_n)∥_{H^{−1/2}(∂W_k)} ≤ C ∥u_n − u∥_{H^1(U∩V_k)} ≤ C/n.

Consider the following system in Ω:

(3.8) L_k s_n = 0 in Ω \ ∂W_k; s_n = 0 on ∂Ω; [s_n] = u − u_n on ∂W_k; [(A Ds_n + b s_n) · ν] = F_1(u) − F_2(u_n) on ∂W_k.
Lemma 14 implies that there exists s_n ∈ H^1(Ω \ ∂W_k), a weak solution of equation (3.8), and there holds

(3.9) ∥s_n∥_{H^1(Ω \ ∂W_k)} ≤ C (∥F_1(u) − F_2(u_n)∥_{H^{−1/2}(∂W_k)} + ∥u − u_n∥_{H^{1/2}(∂W_k)}) ≤ C/n.

Let v_n = s_n 1_{Ω \ ∂W_k} + u 1_{W_k} + u_n 1_{V_k}. By construction, v_n ∈ H^1(Ω) is a weak solution of equation (2.4). Moreover, we have

∥v_n − u∥_{H^1(U)} ≤ ∥s_n∥_{H^1(Ω \ ∂W_k)} + ∥u_n − u∥_{H^1(U \ W_k)} ≤ C/n,

and the conclusion follows. □

We now turn to the proof of theorem 7.

Figure 3.3 (panels labelled (2, 2, 2, 2, 2), (2, 2, 3, 2, 2), (1, 2, 3, 1, 1), (1, 2, 3, 5, 5), (1, 2, 3, 4, 5)). A construction following the construction map i : {1, 2, 3, 4, 5} → {2, 3, 1, 5, 4}. Every colour represents one set of regular coefficients. At each step, all the subdomains within which the construction has not been performed have the same parameters as the subdomain where the solution is constructed.

Proof of theorem 7. Given σ > 0 and x ∈ Ω \ ∪_{i≠j} Γ_{ij}, we choose a construction map i ∈ S_{N+1} such that the starting point x is in the first set, x ∈ Ω_{I_1}. Using lemma 15 for the first step, and then applying lemma 16 and lemma 17 inductively, with L_1 = L[j^i_1], . . . , L_{N+1} = L[j^i_{N+1}] = L, the conclusion follows. □

We now turn to the original operator (written L_original = L − κ), to prove theorem 7.

Proof of theorem 7 for L_original. Thanks to theorem 7 for L = L_original + κ, there holds:

Claim 18. For any σ > 0, there exists ε > 0 such that for any x ∈ Ω \ ∪_{i≠j} Γ_{ij} there exist d + 1 solutions, denoted u^x_1, u^x_2, · · · , u^x_{d+1}, such that u^x_i ∈ H^1(Ω) and L u^x_i = 0 in Ω for i ∈ {1, 2, · · · , d + 1}, and

|det J(u^x_1, u^x_2, · · · , u^x_{d+1})|(y) > σ for any y ∈ B(x, ε) ∩ Ω_j, j ∈ {1, . . . , N + 1}.

If the Dirichlet boundary value problem associated with L_original is well-posed, then for any i ∈ {1, . . . , d + 1} and any x ∈ Ω \ ∪_{i≠j} Γ_{ij}, consider the following Dirichlet boundary value problem:

L_original v^x_i = 0 in Ω; v^x_i = u^x_i on ∂Ω.

Then v^x_i − u^x_i ∈ H^1_0(Ω) satisfies L_original(v^x_i − u^x_i) = κ u^x_i in Ω, and thanks to the well-posedness of L_original, we have

∥v^x_i − u^x_i∥_{H^1_0(Ω)} ≤ C κ,

where the finite constant C is independent of x. Thanks to the regularity of L_original in Ω_j, we have

∥v^x_i − u^x_i∥_{C^{1,α}(Ω_j)} ≤ C κ.

Take κ small enough (since κ ∈ (0, ϑ) is chosen arbitrarily) and take a corresponding ε given in claim 18 for L; thanks to the multi-linearity of det J, we conclude that

|det J(v^x_1, v^x_2, · · · , v^x_{d+1})|(y) > σ

for any y ∈ B(x, ε) ∩ Ω_j, j ∈ {1, . . . , N + 1}.

If the Dirichlet boundary value problem associated with L_original is not well-posed, the kernel of the solution map, written ker(L_original) to avoid introducing additional notations, is finite dimensional and not empty. For any x ∈ Ω \ ∪_{i≠j} Γ_{ij} and any i ∈ {1, . . . , d + 1}, take

u^x_i = u_1 + u_2,

where u_1 ∈ ker(L_original) ⊂ H^1_0(Ω) ⊂ L^2(Ω) and u_2 ∈ ker(L_original)^⊥ ⊂ L^2(Ω). By the Fredholm alternative, there exists a unique v_2 ∈ H^1(Ω) such that v_2 − u_2 ∈ H^1_0(Ω) ∩ ker(L_original)^⊥ satisfies

L_original(v_2 − u_2) = −L_original u_2 = κ u^x_i in Ω; v_2 − u_2 = 0 on ∂Ω.

Furthermore, ∥v_2 − u_2∥_{H^1_0(Ω)} ≤ C κ. Choose v^x_i = u_1 + v_2, which satisfies ∥v^x_i − u^x_i∥_{H^1(Ω)} ≤ C κ. Taking κ small enough, thanks to the regularity of the coefficients in each subdomain and the multi-linearity of det J, we conclude that

|det J(v^x_1, v^x_2, · · · , v^x_{d+1})|(y) > σ. □

4. Proof of Proposition 8

We recall the definition of the geometric complement of an open set Ω ⊂ R^d, which is the smallest open set Π ⊂ R^d such that Ω ⊂ Π and the genus of Π equals zero.

Definition 19. Given any open set U ⊂ Ω, we write g_U = #{j ∈ {1, · · · , N + 1} : Ω_j ⊂ U}, which is the number of pieces contained in U. By construction, we have g_Ω = N + 1.

Lemma 20.
Set h_1 = (x_1, · · · , x_d) on S^{d−1}. When d = 2, 4 or 8, there exist {h_2, · · · , h_d} ∈ (C^1(S^{d−1}; R^d))^{d−1} such that (h_1, h_2, · · · , h_d) ∈ SO_d(S^{d−1}), where SO_d refers to the real orthogonal matrices with positive determinant. Otherwise, for d ≥ 3, there exist {h_2, · · · , h_{d+1}} ∈ (C^1(S^{d−1}; R^d))^d such that (h_1, h_2, · · · , h_{d+1}) ∈ SO_{d+1}(S^{d−1}).

This lemma is proved in section B.

4.1. Proof of Proposition 8 when d = 2, 4 or 8.

Proof. Let Π_i be the geometric complement of Ω_i, where i ∈ {1, · · · , N}. There exists a C^{1,1} diffeomorphism H_i : B_2 → Π_i, which induces a C^{0,1} bijection on the vector fields, DH_i : C^{0,1}(B_2; R^d) → C^{0,1}(Π_i; R^d). It maps SO_d to SO_d, since the degree of H_i is either 1 or −1, and moreover it maps the tangent vectors (respectively the normal vector) on the sphere to the tangent vectors (respectively the normal vector) on ∂Π_i, which is the outer boundary of Ω_i. Take B_{r_i} ⊂ B_2 ⊂ B_{r*_i}, 1 < r_i < 2 < r*_i, such that

g_{H_i(B_{r_i})} + 1 = g_{Π_i} = g_{H_i(B_{r*_i})}

and such that the genus of Π_i \ H_i(B_{r_i}) equals the genus of H_i(B_{r*_i}) \ Π_i, and equals one. In particular any Ω_j, j ≠ i, contained in Π_i is contained in H_i(B_{r_i}) and H_i(B_{r*_i}). Applying lemma 20 with R = 2, when d = 2, 4, 8, there exists a family {h_1, · · · , h_d} of C^1 unit vector fields on ∂B_2. We construct {f_1, · · · , f_d} ∈ C^{0,1}(H_i(B_{r*_i}) \ H_i(B_{r_i}); SO_d) as follows.

Criterion 21.
(1) {f_1, · · · , f_d} = {DH_i(h_1), · · · , DH_i(h_d)} on ∂Π_i.
(2) On H_i(∂B_{r_i}) and H_i(∂B_{r*_i}), let {f_1, · · · , f_d} = {e_1, · · · , e_d}. In other words, we have (f_1, · · · , f_d) = Id on H_i(∂B_{r_i}) and H_i(∂B_{r*_i}).
(3) Since SO_d is path connected, at each x ∈ ∂B_{r_i} there exists a path S ∈ C^{0,1}(∂B_{r_i} × [r_i, r*_i]; SO_d) such that S(x, r_i) = DH_i^{−1}(Id), S(x, 2) = (h_1, · · · , h_d)(2x/∥x∥) and S(x, r*_i) = DH_i^{−1}(Id).
There holds

|S(x, r) − S(x, r′)| ≤ (C(d)/(r*_i − r_i)) |r − r′|,

and

∥D_x S(x, r)∥_∞ ≤ C(d) ∥DH_i^{−1}∥_∞ ∥(Dh_1, · · · , Dh_d)∥_∞.

(4) For any r ∈ (r_i, r*_i), set (f_1, · · · , f_d)(H_i(r x/∥x∥)) = DH_i(S_x(r)) := DH_i(S(x, r)).

In the construction above, for any x ∈ H_i(B_{r*_i}) \ H_i(B_{r_i}), we have (f_1, · · · , f_d)(x) ∈ SO_d. Moreover, since (f_1, · · · , f_d) is constructed by a composition of Lipschitz maps, {f_1, · · · , f_d} is of class C^{0,1} in H_i(B_{r*_i}) \ H_i(B_{r_i}). Indeed,

(4.1) ∥f_k(H_i(x)) − f_k(H_i(y))∥ ≤ ∥f_k(H_i(∥x∥ x/∥x∥)) − f_k(H_i(((∥y∥ + ∥x∥)/2) x/∥x∥))∥ + ∥f_k(H_i(((∥x∥ + ∥y∥)/2) x/∥x∥)) − f_k(H_i(((∥y∥ + ∥x∥)/2) y/∥y∥))∥ + ∥f_k(H_i(((∥x∥ + ∥y∥)/2) y/∥y∥)) − f_k(H_i(∥y∥ y/∥y∥))∥ = ∥DH_i(S_x(∥x∥)) − DH_i(S_x((∥y∥ + ∥x∥)/2))∥ + ∥DH_i(S_y(∥y∥)) − DH_i(S_y((∥y∥ + ∥x∥)/2))∥ + ∥DH_i(S_x((∥y∥ + ∥x∥)/2)) − DH_i(S_y((∥y∥ + ∥x∥)/2))∥ ≤ C(d) (1/(r*_i − r_i) + ∥DH_i^{−1}∥_∞ ∥(Dh_1, · · · , Dh_d)∥_∞) ∥x − y∥.

Note that for each i ∈ {1, · · · , N}, we have (f_1, · · · , f_d) = Id on ∂(H_i(B_{r*_i}) \ H_i(B_{r_i})). Set

(4.2) (f_1, · · · , f_d) = Id in Q := Ω \ ∪_{i=1}^N (H_i(B_{r*_i}) \ H_i(B_{r_i})).

In each H_i(B_{r*_i}) \ H_i(B_{r_i}), {f_1, · · · , f_d} is of class C^{0,1}; it is continuous on ∂(H_i(B_{r*_i}) \ H_i(B_{r_i})) and Lipschitz continuous in Q thanks to (4.2). Thus it is of class C^{0,1} in the whole Ω.

To conclude the proof of proposition 8, we now check that for every u ∈ H^1(Ω) such that Lu = 0 in Ω, the map J_f(u, F) = ((A∇u + bu) · f_1, ∇u · f_2, · · · , ∇u · f_d, u) is of class C^{0,α} in Ω. Note that for each H_i(B_{r*_i} \ B_{r_i}), there exists only one j ∈ {1, · · · , N + 1} \ {i} such that Ω_j ∩ H_i(B_{r*_i} \ B_{r_i}) ≠ ∅, and Γ_{ij} = H_i(∂B_2) ⊂ H_i(B_{r*_i} \ B_{r_i}).
Thanks to the continuity of the flux (A Du + bu) · n = (A Du + bu) · f_1 on Γ_{ij}, the Lipschitz continuity of F, and the C^{0,α} continuity of Du, u, A and b in Ω_i or Ω_j, we conclude that J_f(u, F) is of class C^{0,α} in each H_i(B_{r*_i} \ B_{r_i}) and in Q. Moreover, we note that on each ∂H_i(B_{r*_i} \ B_{r_i}), the coefficients A and b are uniformly C^{0,α}, as they are in the interior of Ω_i or Ω_j. Therefore, J_f(u, F) is of class C^{0,α} on ∂Q \ ∂Ω = ∪_i ∂H_i(B_{r*_i} \ B_{r_i}) (note that for different k and s, ∂H_k(B_{r*_k} \ B_{r_k}) ∩ ∂H_s(B_{r*_s} \ B_{r_s}) = ∅). In particular it is continuous. Thus J_f(u, F) is of class C^{0,α} on Ω. □

4.2. Proof of Proposition 8 for other dimensions.

Proof. Let Π_i be the geometric complement of Ω_i, where i ∈ {1, · · · , N}. There exists a C^{1,1} diffeomorphism H_i : B_2 → Π_i, which induces a C^{0,1} bijection on the vector fields, DH_i : C^{0,1}(B_2; R^d) → C^{0,1}(Π_i; R^d). It maps SO_d to SO_d, since the degree of H_i is either 1 or −1, and moreover it maps the tangent vectors (respectively the normal vector) on the sphere to the tangent vectors (respectively the normal vector) on ∂Π_i, which is the outer boundary of Ω_i. Take B_{r_i} ⊂ B_2 ⊂ B_{r*_i}, 1 < r_i < 2 < r*_i, such that

g_{H_i(B_{r_i})} + 1 = g_{Π_i} = g_{H_i(B_{r*_i})}

and such that the genus of Π_i \ H_i(B_{r_i}) equals the genus of H_i(B_{r*_i}) \ Π_i, and equals one. In particular any Ω_j, j ≠ i, contained in Π_i is contained in H_i(B_{r_i}) and H_i(B_{r*_i}).

For any M = (m_{ij})_{(d+1)×(d+1)} ∈ R^{(d+1)×(d+1)}, we write P(M) = (m_{i,j})_{(d+1)×d}, the (d + 1) × d matrix formed by the first d columns of M.
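The projection P and the rank property used below can be illustrated numerically. A minimal sketch (a random rotation stands in for the frames S(x, r); the helper name `P` mirrors the notation above, the rest is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# A random element of SO(d+1), obtained from a QR factorisation.
A = rng.standard_normal((d + 1, d + 1))
Q, R = np.linalg.qr(A)
Q = Q @ np.diag(np.sign(np.diag(R)))   # normalise column signs; Q stays orthogonal
if np.linalg.det(Q) < 0:               # enforce det = +1
    Q[:, [0, 1]] = Q[:, [1, 0]]

def P(M):
    # Keep the first d columns of a (d+1) x (d+1) matrix.
    return M[:, :-1]

print(np.isclose(np.linalg.det(Q), 1.0))    # True: Q is in SO(d+1)
print(np.linalg.matrix_rank(P(Q)) == d)     # True: P(Q) has rank d
```

Since the columns of an element of SO_{d+1} are orthonormal, any d of them remain linearly independent, which is exactly why rank P(S(x, r)) = d in the construction that follows.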
Thanks to lemma 20, we construct {f_1, · · · , f_{d+1}} ∈ C^{0,1}(H_i(B_{r*_i}) \ H_i(B_{r_i}); R^d)^{d+1}, with rank equal to d, as follows:
(1) {f_1, · · · , f_{d+1}} = {DH_i(h_1), · · · , DH_i(h_{d+1})} on ∂Π_i.
(2) On H_i(∂B_{r_i}) and H_i(∂B_{r*_i}), let {f_1, · · · , f_{d+1}} = P(Id_{d+1}).
(3) There exists a C^{0,1} path S : ∂B_{r_i} × [r_i, r*_i] → SO_{d+1} such that S(x, r_i) = DH_i^{−1}(Id_{d+1}), S(x, 2) = DH_i^{−1}(H_d(x)) (where H_d is given in equation (B.1)) and S(x, r*_i) = DH_i^{−1}(Id_{d+1}). For any r ∈ (r_i, r*_i) and x ∈ ∂B_{r_i}, take (f_1, · · · , f_{d+1})(H_i(r x/r_i)) = P(DH_i(S(x, r))).

Since for any x ∈ ∂B_{r_i} and r ∈ [r_i, r*_i], S(x, r) ∈ SO_{d+1}, we have rank S(x, r) = d + 1, and therefore rank P(S(x, r)) = d. As before, we conclude that {f_1, · · · , f_{d+1}} is also of class C^{0,1} in H_i(B_{r*_i}) \ H_i(B_{r_i}).

Note that for each i ∈ {1, · · · , N}, we have (f_1, · · · , f_{d+1}) = P(Id_{d+1}) on ∂(H_i(B_{r*_i}) \ H_i(B_{r_i})). Set

(4.3) (f_1, · · · , f_{d+1}) = P(Id_{d+1}) in Q := Ω \ ∪_{i=1}^N (H_i(B_{r*_i}) \ H_i(B_{r_i})).

As we proved before, in each H_i(B_{r*_i}) \ H_i(B_{r_i}), {f_1, · · · , f_{d+1}} is of class C^{0,1}. It is continuous on ∂(H_i(B_{r*_i}) \ H_i(B_{r_i})) and Lipschitz continuous in Q thanks to equation (4.3), and therefore of class C^{0,1} globally on Ω. The rest of the proof is identical to that given when d = 2, 4 or 8. □

References

[ABC+08] H. Ammari, E. Bonnetier, Y. Capdeboscq, M. Tanter, and M. Fink. Electrical impedance tomography by elastic deformation. SIAM J. Appl. Math., 68(6):1557–1573, 2008. doi:10.1137/070686408.
[AC18] Giovanni S. Alberti and Yves Capdeboscq. Lectures on elliptic methods for hybrid inverse problems, volume 25 of Cours Spécialisés [Specialized Courses]. Société Mathématique de France, Paris, 2018.
[AC22] Giovanni S. Alberti and Yves Capdeboscq.
Combining the Runge approximation and the Whitney embedding theorem in hybrid imaging. Int. Math. Res. Not., 2022(6):4387–4406, 2022. doi:10.1093/imrn/rnaa162.
[ACdG+11] H. Ammari, Y. Capdeboscq, F. de Gournay, A. Rozanova-Pierrat, and F. Triki. Microwave imaging by elastic deformation. SIAM J. Appl. Math., 71(6):2112–2130, 2011. doi:10.1137/110828241.
[ADCFV17] Giovanni Alessandrini, Michele Di Cristo, Elisa Francini, and Sergio Vessella. Stability for quantitative photoacoustic tomography with well-chosen illuminations. Ann. Mat. Pura Appl. (4), 196(2):395–406, 2017. doi:10.1007/s10231-016-0577-4.
[AGK+17] Habib Ammari, Josselin Garnier, Hyeonbae Kang, Loc Hoang Nguyen, and Laurent Seppecher. Multi-Wave Medical Imaging. World Scientific (Europe), 2017. doi:10.1142/q0067.
[AK04] H. Ammari and H. Kang. Reconstruction of Small Inhomogeneities from Boundary Measurements, volume 1846 of Lecture Notes in Mathematics. Springer, 2004.
[AKKR18] B. J. Adesokan, K. Knudsen, V. P. Krishnan, and S. Roy. A fully non-linear optimization approach to acousto-electric tomography. Inverse Probl., 34(10):16, 2018. Id/No 104004. doi:10.1088/1361-6420/aad6b1.
[Alb22] Giovanni S. Alberti. Non-zero constraints in elliptic PDE with random boundary values and applications to hybrid inverse problems. Inverse Problems, 38(12):124005, 2022. doi:10.1088/1361-6420/ac9924.
[Ale86] Giovanni Alessandrini. An identification problem for an elliptic equation in two variables. Ann. Mat. Pura Appl. (4), 145:265–295, 1986. doi:10.1007/BF01790543.
[Ale88] Giovanni Alessandrini.
Stable determination of conductivity by boundary measurements. Appl. Anal., 27(1-3):153–172, 1988. doi:10.1080/00036818808839730.
[Ale14] Giovanni Alessandrini. Global stability for a coupled physics inverse problem. Inverse Problems, 30(7):075008, 10, 2014. doi:10.1088/0266-5611/30/7/075008.
[AN01] Giovanni Alessandrini and Vincenzo Nesi. Univalent σ-harmonic mappings. Arch. Ration. Mech. Anal., 158(2):155–171, 2001. doi:10.1007/PL00004242.
[AN15] Giovanni Alessandrini and Vincenzo Nesi. Quantitative estimates on Jacobians for hybrid inverse problems. Bulletin of the South Ural State University. Series “Mathematical Modelling, Programming & Computer Software”, 8(3):25–41, 2015.
[Arr99] S. R. Arridge. Optical tomography in medical imaging. Inverse Problems, 15(2):R41, 1999. URL: http://stacks.iop.org/0266-5611/15/i=2/a=022.
[Bal13] Guillaume Bal. Hybrid inverse problems and internal functionals. In Inverse problems and applications: inside out. II, volume 60 of Math. Sci. Res. Inst. Publ., pages 325–368. Cambridge Univ. Press, Cambridge, 2013.
[BBMT13] G. Bal, E. Bonnetier, F. Monard, and F. Triki. Inverse diffusion from knowledge of power densities. Inverse Probl. Imaging, 7(2):353–375, 2013. doi:10.3934/ipi.2013.7.353.
[BCT22] Eric Bonnetier, Mourad Choulli, and Faouzi Triki. Stability for quantitative photoacoustic tomography revisited. Res. Math. Sci., 9(2):30, 2022. Id/No 24. doi:10.1007/s40687-022-00322-6.
[BGM14] Guillaume Bal, Chenxi Guo, and François Monard. Inverse anisotropic conductivity from internal current densities. Inverse Problems, 30(2):025001, 21, 2014. doi:10.1088/0266-5611/30/2/025001.
[BM58] Raoul Bott and John Milnor. On the parallelizability of the spheres.
Bulletin of the American Mathematical Society, 64(3, Part 1):87–89, 1958.
[BMN01] P. Bauman, A. Marini, and V. Nesi. Univalent solutions of an elliptic system of partial differential equations arising in homogenization. Indiana Univ. Math. J., 50(2):747–757, 2001. doi:10.1512/iumj.2001.50.1832.
[BR11] Guillaume Bal and Kui Ren. Multi-source quantitative photoacoustic tomography in a diffusive regime. Inverse Problems, 27(7):075003, 20, 2011. doi:10.1088/0266-5611/27/7/075003.
[Bro62] Felix E. Browder. On approximation by solutions of partial differential equations. Bull. Am. Math. Soc., 68:36–38, 1962. doi:10.1090/S0002-9904-1962-10691-0.
[BU10] Guillaume Bal and Gunther Uhlmann. Inverse diffusion theory of photoacoustics. Inverse Problems, 26(8):085010, 2010. URL: http://stacks.iop.org/0266-5611/26/i=8/a=085010.
[BU13] Guillaume Bal and Gunther Uhlmann. Reconstruction of coefficients in scalar second-order elliptic equations from knowledge of their solutions. Comm. Pure Appl. Math., 66(10):1629–1652, 2013. doi:10.1002/cpa.21453.
[Cal80] A.-P. Calderón. On an inverse boundary value problem. In Seminar on Numerical Analysis and its Applications to Continuum Physics (Rio de Janeiro, 1980), pages 65–73. Soc. Brasil. Mat., Rio de Janeiro, 1980.
[Cap15] Yves Capdeboscq. On a counter-example to quantitative Jacobian bounds. J. Éc. polytech. Math., 2:171–178, 2015. doi:10.5802/jep.21.
[CFdGK09] Y. Capdeboscq, J. Fehrenbach, F. de Gournay, and O. Kavian. Imaging by modification: numerical reconstruction of local conductivities from corresponding power density measurements. SIAM J. Imaging Sci., 2(4):1003–1030, 2009.
[Cho21] Mourad Choulli. Some stability inequalities for hybrid inverse problems. C. R., Math., Acad. Sci.
Paris, 359(10):1251–1265, 2021. doi:10.5802/crmath.262.
[CK98] D. Colton and R. Kress. Inverse acoustic and electromagnetic scattering theory, volume 93 of Applied Mathematical Sciences. Springer-Verlag, Berlin, second edition, 1998.
[CLR20] Mihajlo Cekić, Yi-Hsuan Lin, and Angkana Rüland. The Calderón problem for the fractional Schrödinger equation with drift. Calc. Var. Partial Differ. Equ., 59(3):46, 2020. Id/No 91. doi:10.1007/s00526-020-01740-6.
[CT19] Mourad Choulli and Faouzi Triki. Hölder stability for an inverse medium problem with internal data. Res. Math. Sci., 6(1):15, 2019. Id/No 9. doi:10.1007/s40687-018-0171-z.
[CV03] Y. Capdeboscq and M. S. Vogelius. A general representation formula for boundary voltage perturbations caused by internal conductivity inhomogeneities of low volume fraction. M2AN Math. Model. Numer. Anal., 37(1):159–173, 2003.
[DDBR19] Neda Davoudi, Xose Luis Dean-Ben, and Daniel Razansky. Deep learning optoacoustic tomography with sparse data. Nature Machine Intelligence, 1(10):453–460, 2019. doi:10.1038/s42256-019-0095-3.
[Gia93] M. Giaquinta. Introduction to Regularity Theory for Nonlinear Elliptic Systems. Lectures in Mathematics. Birkhäuser Verlag, 1993.
[GW75] R. E. Greene and H. Wu. Whitney’s imbedding theorem by solutions of elliptic equations and geometric consequences. In Differential geometry (Proc. Sympos. Pure Math., Vol. XXVII, Part 2, Stanford Univ., Stanford, Calif., 1973), pages 287–296. Amer. Math. Soc., Providence, R.I., 1975.
[HMY+04] K. F. Hasanov, A. W. Ma, R. S. Yoon, A. I. Nachman, and M. L. Joy. A new approach to current density impedance imaging. In Engineering in Medicine and Biology Society, 2004. IEMBS ’04. 26th Annual International Conference of the IEEE, volume 1, pages 1321–1324, Sept 2004. doi:10.1109/IEMBS.2004.1403415.
[Ker58] Michel A. Kervaire. Non-parallelizability of the n-sphere for n > 7.
Proceedings of the National Academy of Sciences, 44(3):280–283, 1958.
[KK11] P. Kuchment and L. Kunyansky. Mathematics of photoacoustic and thermoacoustic tomography. In Otmar Scherzer, editor, Handbook of Mathematical Methods in Imaging, pages 817–865. Springer New York, 2011. doi:10.1007/978-0-387-92920-0_19.
[Kuc12] Peter Kuchment. Mathematics of hybrid imaging: a brief review. In The Mathematical Legacy of Leon Ehrenpreis, volume 16 of Springer Proc. Math., pages 183–208. Springer, Milan, 2012. doi:10.1007/978-88-470-1947-8_12.
[Lax56] P. D. Lax. A stability theorem for solutions of abstract differential equations, and its application to the study of the local behavior of solutions of elliptic equations. Comm. Pure Appl. Math., 9:747–766, 1956.
[LJC00] B. Lavandier, J. Jossinet, and D. Cathignol. Experimental measurement of the acousto-electric interaction signal in saline solution. Ultrasonics, 38(9):929–936, 2000. doi:10.1016/S0041-624X(00)00029-9.
[Mal56] Bernard Malgrange. Existence et approximation des solutions des équations aux dérivées partielles et des équations de convolution. Ann. Inst. Fourier, Grenoble, 6:271–355, 1955–1956.
[Man01] N. Mandache. Exponential instability in an inverse problem for the Schrödinger equation. Inverse Problems, 17(5):1435, 2001. URL: http://stacks.iop.org/0266-5611/17/i=5/a=313.
[NTT11] A. Nachman, A. Tamasan, and A. Timonov. Current density impedance imaging. In G. Bal, D. Finch, P. Kuchment, P. Stefanov, and G. Uhlmann, editors, Tomography and Inverse Transport Theory, Contemporary Mathematics, 559:035014, 2011.
[NY79] Shlomo P. Neuman and Sidney Yakowitz. A statistical approach to the inverse problem of aquifer hydrology: 1. Theory.
Water Resources Research, 15(4):845–860, 1979. doi:10.1029/WR015i004p00845.
[RRN09] A. Rosenthal, D. Razansky, and V. Ntziachristos. Quantitative optoacoustic signal extraction using sparse signal representation. IEEE Transactions on Medical Imaging, 28(12):1997–2006, Dec 2009. doi:10.1109/TMI.2009.2027116.
[Sch15] O. Scherzer, editor. Handbook of Mathematical Methods in Imaging, volumes I–III. Springer, 2nd edition, July 2015.

20 YVES CAPDEBOSCQ AND TIANRUI DAI

[SJAH91] G. C. Scott, M. L. G. Joy, R. L. Armstrong, and R. M. Henkelman. Measurement of nonuniform current density by magnetic resonance. IEEE Transactions on Medical Imaging, 10(3):362–374, Sep 1991. doi:10.1109/42.97586.
[SKL+12] J. K. Seo, D. Kim, J. Lee, O. I. Kwon, S. Z. K. Sajib, and E. J. Woo. Electrical tissue property imaging using MRI at dc and Larmor frequency. Inverse Problems, 28(8):084002, 2012. doi:10.1088/0266-5611/28/8/084002.
[SW11] Jin Keun Seo and Eung Je Woo. Magnetic resonance electrical impedance tomography (MREIT). SIAM Review, 53(1):40–68, 2011. URL: http://epubs.siam.org/doi/pdf/10.1137/080742932.
[Uhl09] G. Uhlmann. Electrical impedance tomography and Calderón's problem. Inverse Problems, 25(12):123011, 2009. URL: http://stacks.iop.org/0266-5611/25/i=12/a=123011.
[Uhl14] Gunther Uhlmann. 30 years of Calderón's problem. Sémin. Laurent Schwartz, EDP Appl., 2012–2013:ex, 2014. doi:10.5802/slsedp.40.
[WA11] K. Wang and M. A. Anastasio. Photoacoustic and thermoacoustic tomography: image formation principles. In O. Scherzer, editor, Handbook of Mathematical Methods in Imaging, pages 781–815. Springer New York, 2011. doi:10.1007/978-0-387-92920-0_18.
[WLM94] Eung J. Woo, Soo Yeol Lee, and Chi Woong Mun.
Impedance tomography using internal current density distribution measured by nuclear magnetic resonance. Proc. SPIE, 2299:377–385, 1994. doi:10.1117/12.179269.
[Woo91] J. C. Wood. Lewy's theorem fails in higher dimensions. Math. Scand., 69(2):166 (1992), 1991.
[WS12] T. Widlak and O. Scherzer. Hybrid tomography for conductivity imaging. Inverse Problems, 28(8):084008, 2012. doi:10.1088/0266-5611/28/8/084008.
[ZW04] Hao Zhang and Lihong V. Wang. Acousto-electric tomography. Proc. SPIE, Photons Plus Ultrasound: Imaging and Sensing, 5320:145–149, 2004. doi:10.1117/12.532610.

Appendix A. Additional Proofs

A.1. Proof of lemma 14.

Proof of lemma 14. Given $v \in H^1_0(\Omega)$, there holds, using the a priori bounds 2, the Cauchy–Schwarz inequality and completing a square,
\[
\langle Lv, v\rangle_{H^{-1}(\Omega)\times H^1_0(\Omega)}
= \int_\Omega A Dv\cdot Dv + (b+c)\cdot Dv\, v + q v^2\, dx
\geq \lambda \|Dv\|_{L^2(\Omega)}^2 - 2\lambda^{-1}\|Dv\|_{L^2(\Omega)}\|v\|_{L^2(\Omega)} - \lambda^{-1}\|v\|_{L^2(\Omega)}^2
\geq \lambda\left(\|Dv\|_{L^2(\Omega)} - \lambda^{-2}\|v\|_{L^2(\Omega)}\right)^2 - \left(\lambda^{-1}+\lambda^{-3}\right)\|v\|_{L^2(\Omega)}^2.
\]
Thus, writing $M = \lambda^{-1}+\lambda^{-3}+1$, for any $(i_1,\dots,i_{N+1}) \in \{1,\dots,N+1\}^{N+1}$, all Dirichlet boundary value problems associated with $L[i_1,\dots,i_{N+1}] + M\,\mathrm{Id}$ are well posed in $\Omega$. If the Dirichlet boundary value problem associated with $L_i := L[i_1,\dots,i_{N+1}]$ is not well posed, there exists a non-zero solution of
\[
L_i u = 0 \ \text{in}\ \Omega, \qquad u = 0 \ \text{on}\ \partial\Omega.
\]
Consider $(L_i + M\,\mathrm{Id})^{-1}$ as a linear operator from $L^2(\Omega)$ to $L^2(\Omega)\cap H^1_0(\Omega)$. The ill-posedness of $L_i$ implies that $M^{-1} \in \sigma\big((L_i+M\,\mathrm{Id})^{-1}\big)$. Thanks to the Rellich–Kondrachov embedding, $(L_i+M\,\mathrm{Id})^{-1} : L^2(\Omega)\to L^2(\Omega)$ is a compact linear operator acting on $L^2(\Omega)$, therefore $M^{-1}$ is an isolated eigenvalue. That is, there exists $\aleph^1_{[i_1,\dots,i_{N+1}]} > 0$ such that $B\big(M^{-1}, \aleph^1_{[i_1,\dots,i_{N+1}]}\big)\setminus\{M^{-1}\} \subset \rho\big((L_i+M\,\mathrm{Id})^{-1}\big)$.
When the Dirichlet boundary value problem is well posed, $M^{-1} \in \rho\big((L_i+M\,\mathrm{Id})^{-1}\big)$. The resolvent set is open, thus there exists some $\aleph^2_{[i_1,\dots,i_{N+1}]} > 0$ such that $B\big(M^{-1},\aleph^2_{[i_1,\dots,i_{N+1}]}\big) \subset \rho\big((L_i+M\,\mathrm{Id})^{-1}\big)$.

JACOBIAN CONSTRAINTS WITH DISCONTINUOUS COEFFICIENTS

Now define
\[
\aleph = \min_{i_1,\dots,i_{N+1}\in\{1,\dots,N+1\}} \left(\aleph^1_{[i_1,\dots,i_{N+1}]}, \aleph^2_{[i_1,\dots,i_{N+1}]}\right), \qquad \vartheta = \frac{\aleph M^2}{1+\aleph M}.
\]
We verify that for every $\kappa\in(0,\vartheta)$, $M^{-1} \notin \sigma\big((L_i+\kappa+M)^{-1}\big)$, which in turn means that $L_i + \kappa$ is well posed. □

A.2. Proof of lemma 22.

Fact. There exists some $\eta_0 > 0$ such that for any $0<\eta<\eta_0$ and any $t\in[1/2, 1]$ there holds
\[
(A.1)\qquad \forall u \in H^1_0(U^t), \quad \langle L_k u, u\rangle_{H^{-1}(U^t),H^1_0(U^t)} \geq \tfrac{1}{3}\lambda \|u\|^2_{H^1_0(U^t)}.
\]

Proof. Indeed, we have, for any $t>0$,
\[
(A.2)\qquad \langle L_k u, u\rangle = \int_{U^t} A\nabla u\cdot\nabla u + u\,(b+c)\cdot\nabla u + qu^2\,dx
\geq \lambda\|\nabla u\|^2_{L^2(U^t)} - 2\lambda^{-1}\int_{U^t}|\nabla u|\,|u|\,dx - \lambda^{-1}\|u\|^2_{L^2(U^t)}
\geq \frac{\lambda}{2}\|\nabla u\|^2_{L^2(U^t)} - \frac{\lambda^2+2}{\lambda^3}\|u\|^2_{L^2(U^t)}.
\]
To address the lower order term we rely on lemma 22. Since $U^t = \psi_k^{-1}\big(B_{1/t}\setminus B_{(1-\eta)/t}\big)$, by changing variables, lemma 22 shows that for any $u\in H^1_0(U^t)$ there holds
\[
(A.3)\qquad \|u\|^2_{L^2(U^t)} \leq C\,\frac{\eta^2}{t^2}\,\|\nabla u\|^2_{L^2(U^t)} \leq 4C\eta^2\,\|\nabla u\|^2_{L^2(U^t)}.
\]
Combining equation (A.2) and equation (A.3), we have
\[
\langle L_k u, u\rangle_{H^{-1}(U^t)\times H^1_0(U^t)} \geq \left(\frac{\lambda}{2} - 4C\,\frac{\lambda^2+2}{\lambda^3}\,\eta^2\right)\|\nabla u\|^2_{L^2(U^t)}.
\]
Choosing $\eta>0$ small enough, there holds for all $t\in[1/2, 1]$,
\[
\langle L_k u, u\rangle_{H^{-1}(U^t)\times H^1_0(U^t)} \geq \tfrac{1}{3}\lambda\,\|\nabla u\|^2_{L^2(U^t)}. \qquad □
\]

Lemma 22. Write $B_r$ for the ball centred at the origin of radius $r$. Given $0 < r_1 < r_2$, for any $s$ and $t$ such that $r_1 < t < s < r_2$, there holds
\[
\forall u \in H^1_0(B_s\setminus B_t), \qquad \|u\|^2_{L^2(B_s\setminus B_t)} \leq c\,(s-t)^2\,\|\nabla u\|^2_{L^2(B_s\setminus B_t)},
\]
for some constant $c$ which depends on $r_1$ and $r_2$ only.

Proof.
Consider the Dirichlet eigenvalue problem in $B_s\setminus B_t$:
\[
\triangle u = \rho_{st}\, u \ \text{in}\ B_s\setminus B_t, \qquad u = 0 \ \text{on}\ \partial B_s, \qquad u = 0 \ \text{on}\ \partial B_t.
\]
We note that the first eigensolution is radial, $u = f_{st}(|x|)$, and $f_{st}\in C^\infty((t,s))$ satisfies
\[
\frac{1}{r^{d-1}}\,\partial_r\big(r^{d-1}\partial_r f_{st}\big) = \rho^1_{st}\, f_{st} \ \text{in}\ (t,s), \qquad f_{st}(s) = f_{st}(t) = 0.
\]
By the change of variable $r \mapsto \frac{r_2-r_1}{s-t}(r-t)+r_1$, we find that $f_{st}(r) = f_{r_1r_2}\big(\frac{r_2-r_1}{s-t}(r-t)+r_1\big)$, and $\rho^1_{st} = \big(\frac{r_2-r_1}{s-t}\big)^2 \rho^1_{r_1r_2}$. Moreover,
\[
\rho^1_{st} = \inf_{\substack{u\in H^1_0(B_s\setminus B_t)\\ u\neq 0}} \frac{\int_{B_s\setminus B_t}\nabla u\cdot\nabla u\,dx}{\int_{B_s\setminus B_t}u^2\,dx} = \inf_{\substack{u\in H^1_0(B_s\setminus B_t)\\ u\neq 0}} \frac{\|\nabla u\|^2_{L^2(B_s\setminus B_t)}}{\|u\|^2_{L^2(B_s\setminus B_t)}}.
\]
We conclude that $\|u\|^2_{L^2(B_s\setminus B_t)} \leq \big(\rho^1_{r_1r_2}\big)^{-1}\big(\frac{s-t}{r_2-r_1}\big)^2\,\|\nabla u\|^2_{L^2(B_s\setminus B_t)}$ for every $u\in H^1_0(B_s\setminus B_t)$. □

A.3. Proof of lemma 10.

Proof. We have
\[
\begin{pmatrix} J_f(u_1, F) \\ \vdots \\ J_f(u_P, F) \end{pmatrix} = J(u_1,\dots,u_P)^T\, T\big(x, E_{d+1,d}f_1,\dots,E_{d+1,d}f_{d^\star}, e_{d+1}\big).
\]
Thanks to proposition 8 there holds $\operatorname{rank}(f_1,\dots,f_{d^\star}) = d$. Furthermore,
\[
\operatorname{span}\big(E_{d+1,d}f_1,\dots,E_{d+1,d}f_{d^\star}\big) \cap \mathbb{R}e_{d+1} = \{0\},
\]
thus proposition 9 shows that $\operatorname{rank} T\big(x, E_{d+1,d}f_1,\dots,E_{d+1,d}f_{d^\star}, e_{d+1}\big) = d+1$. Since $(u_1,\dots,u_P)\in\mathcal{A}(P)$, we have $\operatorname{rank} J(u_1,\dots,u_P)^T = d+1$ at every $x$, thus $\operatorname{rank} F_x = d+1$.

Note that for all $a\in\mathbb{R}^{P-1}$, $\operatorname{rank} P_a = P-1$; thus, for every $x$, we have
\[
\operatorname{rank}(P_a\circ F_x) \leq \min(\operatorname{rank} P_a, \operatorname{rank} F_x) \leq d+1
\]
and
\[
\operatorname{rank}(P_a\circ F_x) \geq \operatorname{rank} P_a + \operatorname{rank} F_x - P = d.
\]
If $a\in\mathbb{R}^{P-1}\setminus G$, then there exists $x\in\Omega$ such that
\[
(A.4)\qquad \operatorname{rank}(P_a\circ F_x) = d \iff \dim\ker(P_a\circ F_x) = d^\star+1-d \iff \dim F_x^{-1}\big(\operatorname{span}\{(a_1,\dots,a_{P-1},1)\}\big) = d^\star+1-d.
\]
We have the implication $a\in\mathbb{R}^{P-1}\setminus G \Rightarrow (a_1,\dots,a_{P-1},1)\in\cup_x \operatorname{Im}(F_x)$. Conversely, if $(a_1,\dots,a_{P-1},1)\in\cup_x\operatorname{Im}(F_x)$, then there exist $x\in\Omega\setminus\cup_{i,j}\Gamma_{ij}$ and $v_a\in\mathbb{R}^{d^\star+1}$ such that $F_x v_a = (a_1,\dots,a_{P-1},1)$. Thus,
\[
\mathbb{R}v_a \oplus \ker(F_x) \subset F_x^{-1}\big(\operatorname{span}\{(a_1,\dots,a_{P-1},1)\}\big).
\]
Even though the choice of $v_a$ is arbitrary, any other choice would be in $\mathbb{R}v_a\oplus\ker(F_x)$; thus $\mathbb{R}v_a\oplus\ker(F_x) = F_x^{-1}\big(\operatorname{span}\{(a_1,\dots,a_{P-1},1)\}\big)$. Note that since $\operatorname{rank}(F_x) = d+1$, $\dim\ker(F_x) = d^\star - d$, therefore $\dim\big(\mathbb{R}v_a\oplus\ker(F_x)\big) = d^\star+1-d$, which from equation (A.4) implies that $\operatorname{rank}(P_a\circ F_x) = d$.

In conclusion, $P_a\circ F_x$ has rank $d+1$ if and only if $(a_1,\dots,a_{P-1},1)\notin\cup_x\operatorname{Im}(F_x)$. Set $B = \cup_x\operatorname{Im}(F_x)\cap\{b\in\mathbb{R}^P : b_P = 1\}$. The identity $\mathbb{R}^{P-1}\setminus G = P_{P-1,P}(B)$ therefore holds. We now follow the argument in [AC22, Lemma 4.1] and [GW75] and deduce that $\mathcal{H}^{P-1}(B) = 0$. The conclusion is attained as the $(P-1)$-dimensional Hausdorff measure equals the $(P-1)$-dimensional Lebesgue measure. □

Appendix B. Proof of lemma 20

When $d\notin\{2,4,8\}$ it is impossible to find a family of continuous vector fields $\{h_1,\dots,h_d\}$ such that for every $x\in\partial B_1$ there holds
(1) $h_1(x) = x$,
(2) $\langle h_i, h_j\rangle(x) = \delta_{ij}$ for $i,j = 1,\dots,d$.

In odd dimensions, this is a consequence of the so-called hairy ball theorem. In general, the following result is proved in [Ker58] and [BM58].

Theorem. The tangent bundle of $S^{d-1}$ is trivial if and only if $d = 2, 4$ or $8$. Moreover, when $d\in\{2,4,8\}$ there exist $\{h_2,\dots,h_d\}\in\big(C^1(S^{d-1};\mathbb{R}^d)\big)^{d-1}$ such that $(h_1,\dots,h_d)\in SO_d(S^{d-1})$, where $SO_d$ refers to the real orthogonal matrices with positive determinant.

Explicit examples are:
(1) When $d=2$: for all $(x_1,x_2)\in\partial B_1$, set $h_1 = (x_1,x_2)$ and $h_2 = (-x_2,x_1)$.
(2) When $d=4$: for all $(x_1,x_2,x_3,x_4)\in\partial B_1$, set
\[
h_1 = (x_1, x_2, x_3, x_4),\quad
h_2 = (-x_2, x_1, -x_4, x_3),\quad
h_3 = (x_3, -x_4, -x_1, x_2),\quad
h_4 = (x_4, x_3, -x_2, -x_1).
\]
(3) When $d=8$: for all $(x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8)\in\partial B_1$, set
\[
h_1 = (x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8),\quad
h_2 = (-x_2, x_1, -x_4, x_3, -x_6, x_5, x_8, -x_7),\quad
h_3 = (-x_3, x_4, x_1, -x_2, -x_7, -x_8, x_5, x_6),\quad
h_4 = (-x_4, -x_3, x_2, x_1, -x_8, x_7, -x_6, x_5),\quad
h_5 = (-x_5, x_6, x_7, x_8, x_1, -x_2, -x_3, -x_4),\quad
h_6 = (-x_6, -x_5, x_8, -x_7, x_2, x_1, x_4, -x_3),\quad
h_7 = (-x_7, -x_8, -x_5, x_6, x_3, -x_4, x_1, x_2),\quad
h_8 = (-x_8, x_7, -x_6, -x_5, x_4, x_3, -x_2, x_1).
\]

The second part of lemma 20 follows from the following proposition.

Proposition. There exist $h_2,\dots,h_{d+1}$ in $\big(C^1(S^{d-1};\mathbb{R}^d)\big)^{d}$ such that $\langle h_i, x\rangle = 0$ for $i = 2,\dots,d+1$ and $\operatorname{rank}(x, h_2,\dots,h_{d+1}) = d$ on $S^{d-1}$.

Proof. For every $x\in S^{d-1}\subset\mathbb{R}^d$, we write $x = (x_1,\dots,x_d)$. Set
\[
h_i = \big(x_1 x_{d+2-i} - \delta_{1,d+2-i},\ \dots,\ x_d x_{d+2-i} - \delta_{d,d+2-i}\big),
\]
where $\delta_{i,j}$ is the Kronecker symbol. We have $\langle h_i, x\rangle = \big(\sum_{j=1}^d x_j^2\big) x_{d+2-i} - x_{d+2-i} = 0$ for $i\geq 2$, thus each $h_i$ is tangent to $S^{d-1}$. Take
\[
(B.1)\qquad H_d = \begin{pmatrix} h_1 & 1 \\ h_2 & x_d \\ \vdots & \vdots \\ h_{d+1} & x_1 \end{pmatrix}_{(d+1)\times(d+1)},
\ \text{that is,}\
H_d = \begin{pmatrix} x_1 & x_2 & \dots & x_d & 1 \\ x_1x_d & x_2x_d & \dots & x_d^2-1 & x_d \\ \vdots & \vdots & & \vdots & \vdots \\ x_1x_2 & x_2^2-1 & \dots & x_dx_2 & x_2 \\ x_1^2-1 & x_1x_2 & \dots & x_dx_1 & x_1 \end{pmatrix},
\]
where $h_1 = x$. There holds $\operatorname{rank} H_d = d+1$ for $d\geq 2$. The proof is by induction. When $d=2$, we compute $\det H_2 = -1$. When $d\geq 3$, subtracting $x_d$ times the first row from the second row and expanding along the resulting row $(0, 0, \dots, -1, 0)$, we have
\[
\det H_d = \begin{vmatrix} x_1 & x_2 & \dots & x_d & 1 \\ 0 & 0 & \dots & -1 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ x_1x_2 & x_2^2-1 & \dots & x_dx_2 & x_2 \\ x_1^2-1 & x_1x_2 & \dots & x_dx_1 & x_1 \end{vmatrix}
= (-1)^{d+1}\det H_{d-1} = \dots = (-1)^{\frac{d(d+3)}{2}}.
\]
Thus we have $\operatorname{rank} H_d = d+1$, which implies $\operatorname{rank}(h_1,\dots,h_{d+1}) = d$. We modify $h_{d+1} \to (-1)^{\frac{d(d+3)}{2}}\, h_{d+1}$, and modify the last line of $H_d$ to be $\big((-1)^{\frac{d(d+3)}{2}}\, h_{d+1},\ (-1)^{\frac{d(d+3)}{2}}\, x_1\big)$, to obtain $H_d \in SO_{d+1}$. □

Appendix C. Proof of theorem 6

Proof. We reproduce the proof given in [AC22] with the necessary adaptations for the reader's convenience.
Thanks to theorem 7, and in turn equation (2.7), there exists a large $P_0$ such that $\mathcal{A}(P_0)\neq\emptyset$. Write $P^\star = \binom{d+d^\star+1}{\alpha}$. Take $h\in H(\Omega)^{P^\star}$, namely $h = (h_1,\dots,h_{P^\star})$. Take $u_1,\dots,u_{P_0}\in\mathcal{A}(P_0)$. Then $(h_1,\dots,h_{P^\star}, u_1,\dots,u_{P_0})\in\mathcal{A}(P_0+P^\star)$, and for $x\in\Omega\setminus\cup_{i\neq j}\Gamma_{ij}$,
\[
\operatorname{rank} J(h_1,\dots,h_{P^\star},u_1,\dots,u_{P_0})(x) = d+1.
\]
Thanks to lemma 11, for a.e. $a^{P_0+P^\star-1}\in\mathbb{R}^{P_0+P^\star-1}$, there holds
\[
\operatorname{rank} J\big(h_1 - a^{P_0+P^\star-1}_1 u_{P_0},\ \dots,\ h_{P^\star} - a^{P_0+P^\star-1}_{P^\star} u_{P_0},\ \dots,\ u_{P_0-1} - a^{P_0+P^\star-1}_{P_0+P^\star-1} u_{P_0}\big)(x) = d+1.
\]
Repeating this reduction $P_0$ times, for a.e. $a^T = (a^T_1,\dots,a^T_T)\in\mathbb{R}^T$, where $T$ ranges over $P^\star,\dots,P_0+P^\star-1$, there holds
\[
\operatorname{rank} J\Big(h_1 - \sum_{T=P^\star}^{P_0+P^\star-1} a^T_1\, u_{T-P^\star+1},\ \dots,\ h_{P^\star} - \sum_{T=P^\star}^{P_0+P^\star-1} a^T_{P^\star}\, u_{T-P^\star+1}\Big)(x) = d+1,
\]
which means that $h_{a^T} = \big(h_1 - \sum_{T=P^\star}^{P_0+P^\star-1} a^T_1 u_{T-P^\star+1},\ \dots,\ h_{P^\star} - \sum_{T=P^\star}^{P_0+P^\star-1} a^T_{P^\star} u_{T-P^\star+1}\big)\in\mathcal{A}(P^\star)$. For any $\varepsilon>0$, taking $a^T$ small enough, since $u_1,\dots,u_{P_0}$ are bounded in $H(\Omega)$, we conclude that
\[
\|h - h_{a^T}\|_{H(\Omega)^{P^\star}} \leq \varepsilon.
\]
We then prove that $\mathcal{A}(P^\star)$ is an open set. For any $x\in\Omega$ and $u = (u_1,\dots,u_{P^\star})\in H(\Omega)^{P^\star}$, we define $\operatorname{Det} : \Omega\times H(\Omega)^{P^\star}\to\mathbb{R}$ by
\[
\operatorname{Det}(x,u) := \sum_{i_1,\dots,i_{d+1}=1}^{P^\star} \det\Big(\big(J_f(u_{i_1},\dots,u_{i_{d+1}})\big)(x)\Big)^2.
\]
Thanks to the continuity and the multi-linearity of $J_f$, $\operatorname{Det}(x,u)$ is continuous for every $x\in\Omega$ and $u = (u_1,\dots,u_{P^\star})\in H(\Omega)^{P^\star}$. Take $u\in\mathcal{A}(P^\star)$; for every $x\in\Omega$ there holds $\operatorname{Det}(x,u) > 0$. Therefore, there exists some constant $C>0$ such that
\[
\inf_{x\in\Omega}\operatorname{Det}(x,u) \geq C > 0.
\]
Take $\varepsilon>0$ small enough and $v = (v_1,\dots,v_{P^\star})\in H(\Omega)^{P^\star}$ such that
\[
\|u-v\|_{H(\Omega)^{P^\star}} = \sum_{i=1}^{P^\star}\|u_i - v_i\|_{H(\Omega)} \leq \varepsilon \quad\text{and}\quad \operatorname{Det}(x,v) \geq \frac{C}{2} > 0,
\]
which implies $\operatorname{rank} J_f(v_1,\dots,v_{P^\star}) = d+1$. Thanks to the relation between $J$ and $J_f$, we conclude that $\operatorname{rank} J(v_1,\dots,v_{P^\star}) = d+1$, which implies $v\in\mathcal{A}(P^\star)$.
+□ +Université Paris Cité, CNRS, Sorbonne Université, Laboratoire Jacques-Louis Lions (LJLL), F- +75006 Paris, France +Email address: yves.capdeboscq@u-paris.fr +Email address: tianrui.dai@etu.u-paris.fr + diff --git a/Z9AzT4oBgHgl3EQfm_3D/content/tmp_files/load_file.txt b/Z9AzT4oBgHgl3EQfm_3D/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..9ea3e15bb0f1e2203547de06455d0355576559eb --- /dev/null +++ b/Z9AzT4oBgHgl3EQfm_3D/content/tmp_files/load_file.txt @@ -0,0 +1,1322 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf,len=1321 +page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='01574v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='AP] 4 Jan 2023 POSITIVE JACOBIAN CONSTRAINTS FOR ELLIPTIC BOUNDARY VALUE PROBLEMS WITH PIECEWISE-REGULAR COEFFICIENTS ARISING FROM MULTI-WAVE INVERSE PROBLEMS YVES CAPDEBOSCQ AND TIANRUI DAI Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Multi-wave inverse problems are indirect imaging methods using the interaction of two different imaging modalities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' One brings spatial accuracy, and the other contrast sensitivity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The inversion method typically involve two steps.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The first step is devoted to accessing internal datum of quantities related to the unknown parameters being observed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The second step involves recovering the parameters themselves from the internal data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' To perform that inversion, a typical requirement is that the Jacobian of fields involved does not vanish.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' A number of authors have considered this problem in the past two decades, and a variety of methods have been developed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Existing techniques require Hölder continuity of the parameters to be reconstructed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In practical applications, the medium may present embedded elements, with distinct physical properties, leading to discontinuous coefficients.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In this article we explain how a Jacobian constraint can imposed in the piecewise regular case, when the physical model is a divergence form second order linear elliptic boundary value problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Introduction Parameter reconstruction problems for elliptic boundary value problems are indirect recon- struction problems with, at best, logarithmic stability [Ale88, Man01].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' While these type of measurements are desirable as they are non intrusive, and typically require low cost apparels, such weak stability implies that only low-resolution reconstruction can be achieved in prac- tice [Sch15, WA11, WS12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The Calderón problem for electrical impedance tomography (EIT) [Cal80, Uhl09, Uhl14], the inverse scattering problems [CK98] and optical tomography [Arr99] are the main examples of such problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The stability of such methods dramatically improve when, instead of making absolute measurements ex nihilo, they are used to estimate perturbations of a know medium [AK04, CV03].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' On the other hand, fast wave imaging modalities, such as ultrasound tomography or MRI, preserve singularities and achieve excellent spatial accuracy, at the cost of a loss of quantitative information with respect to the amplitude of the parameters involved.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Over the past two decades, coupled-physics, or multi-wave, or hybrid inverse problems (the final commonly accepted name is yet to be determined) have emerged.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' These imaging modalities aim to benefit from the advantages of both approaches: one for accurate contrast estimations, and the other for high resolution [AC18, AGK+17, Bal13, Kuc12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The most developed hybrid modality is photo-acoustic tomography (PAT) [DDBR19, KK11, RRN09, WA11], in which light and ultrasounds are combined.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Many other modalities have been considered, all combining a diffusive process with a much less diffusive one [AKKR18, ABC+08, ACdG+11, HMY+04, KK11, LJC00, SKL+12, SW11, WA11, ZW04].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The parameter reconstruction method in all these problems start with a data collection step, where some internal data is reconstructed, involving both the parameter of interest and the solution of the PDE involving this parameter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In PAT, the internal data is µ(x)u(x) , where µ is the optical absorption and u is the light intensity.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In Current Density Impedance Imaging (CDII), the internal data is |γ(x)∇u(x)|, where γ is the conductivity and u is the electric field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The second step involves 2000 Mathematics Subject Classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [2010]{35J25, 35B38, 35R30}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Key words and phrases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Hybrid inverse problems, coupled-physics imaging, photo-acoustic tomography, non- zero constraints, Runge approximation, elliptic equations, Whitney reduction, unique continuation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' This study contributes to the IdEx Université de Paris ANR-18-IDEX-0001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' 1 2 YVES CAPDEBOSCQ AND TIANRUI DAI extracting the parameter from this data, (µ(x) in PAT, γ in CDII).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The mathematical problem considered in this article is related to this second step.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Example (A Jacobian constraint example).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Consider the problem of reconstructing γ, a scalar function, in − div(γDu) = 0 in Ω, from the knowledge of the potential u in Ω, as in [Ale86].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' It appears in a variety of contexts, such has Hydrology [NY79], CDII [BGM14, NTT11, SJAH91, WLM94] and Acousto–Electric Tomography [AKKR18, ABC+08, CFdGK09].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' If γ is regular, we have (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1) D(lnγ) · Du = −∆u in Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Suppose given d measurements u1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' , ud.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' By (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1) we obtain (D(lnγ))T �Du1, · · , Dud � = − �∆u1, · · , ∆ud � in Ω.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' If det � Du1, · · , Dud � > 0 holds true, then ∇(lnγ), and in turn γ up to a multiplicative constant, are explicitly readable from the data by inverting the matrix �Du1, · · , Dud � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' More generally, given N ≥ d measurements, the least-square optimisation problem associated to the (possibly overdetermined) system of equations (D(lnγ))T � Du1, · · , DuN � = − � ∆u1, · · , ∆uN � in Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' has a unique minimiser when det � Duii, · · , Duid � > 0 for some (i1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' , id) ∈ {1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' , N}d.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Using unique continuation methods, it is possible to address the parameter reconstruction problem without imposing Jacobian constraints [Ale14, ADCFV17, BCT22, Cho21, CT19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' On the other hand, when non-vanishing constraints are satisfied, the stability estimates are optimal (of Lipschitz type) and often lead to explicit reconstruction formulae [AC18, Bal13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The focus of this paper is non-vanishing Jacobian constraints.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In two dimensions, for the con- ductivity equation, a generalisation of the Radó–Kneser–Choquet theorem [Ale86, AN01, AN15, BMN01] shows that imposing a non-vanishing Jacobian constraint globally, and independently of the conductivity is possible : in practice, it suffices to verify that the Jacobian doesn’t vanish when the conductivity is equal to one everywhere.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Such an approach cannot be extended to three dimensions, or more general elliptic problems [Cap15, Woo91].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Suitable solutions can be construc- ted using complex geometrical optics solutions (CGOS) [Bal13, BBMT13, BR11, BU10, BU13], but this construction depends on the unknown coefficients, which must be smooth and isotropic.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Another approach [AC18, BU13] is based on the Runge approximation [Lax56, Mal56].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' It is valid for all PDE for which a Unique Continuation Property holds, it allows for anisotropic coefficients, and the smoothness assumptions are precisely that for the Unique Continuation Property of the underlying equation, namely Lipschitz regularity or Hölder continuity depending on the equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' By combining this approach with the Whitney projection method, it is proved in [AC22] that the set of suitable solutions is open and dense, with explicit estimates on the number of solutions needed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' A very related result, using a slightly different Whitney projection argument, was proved independently around the same time [CLR20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Very recently, another approach was proposed, which showed that choosing random boundary values was possible [Alb22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' All these methods rely on some regularity of the coefficients.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In practical cases, it is desirable to consider the case of piecewise regular coefficients, each region corresponding to a different strata in geology, or a different organ in medical imaging.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In this work, we show how the approach introduced in [AC22] can be extended to the case of piecewise regular coefficients.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' We use existing unique continuation results within the regular parts of the domain, [Bro62, Lax56, Mal56], and introduce adequate quantities to cross over discontinuities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' These constructions may prove useful for other models where the principal part is in divergence form.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In section §2 we detail our assumptions, state the main result of this article, and explain its proof, using intermediate results proved in the subsequent sections.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In [AC22], Hölder continuity is crucially used in two instances : to show existence of solutions satisfying the adequate constraints via Runge Approximation, and to use the Whitney projection method, which is based on Sard’s JACOBIAN CONSTRAINTS WITH DISCONTINUOUS COEFFICIENTS 3 lemma, which itself uses Hölder continuity.' 
As a result, our developments come in two parts. In section §3 we establish the existence of a finite number of solutions such that the non-vanishing Jacobian constraint is satisfied in the whole domain: this requires adapting existing unique continuation results to cross smooth interfaces. In section §4 we use the continuity of fluxes across interfaces resulting from the divergence structure of the principal part, via appropriate charting, to deduce non-vanishing properties of gradients up to the internal subregion boundaries.

2. Model, Assumptions and Main Results

2.1. Problem definition. The ambient space is $\mathbb{R}^d$, with $d \geq 2$.

Assumption 1.
Assume that $\Omega$ is an open, bounded and connected domain in $\mathbb{R}^d$ with a $C^2$ boundary. Assume that $\Omega$ contains $N \geq 1$ open connected disjoint sets $\Omega_1, \dots, \Omega_N$ with $C^2$ boundaries such that
\[
0 < d\left( \cup_{\ell=1}^N \Omega_\ell, \mathbb{R}^d \setminus \Omega \right).
\]
Assume furthermore that for any $i \in \{1, \dots, N\}$, $\Omega_i$ has a $C^2\left(\mathbb{R}^{d-1}\right)$ boundary, and each connected component of its boundary is in common with at most one other $\Omega_j$, $j \neq i$. We write $\Omega_{N+1} = \Omega \setminus \left( \cup_{i=1}^N \Omega_i \right)$, and denote $\Gamma_{ij} = \partial\Omega_i \cap \partial\Omega_j$ when this set is non-empty.
Additionally, assume that each $\Gamma_{ij}$ is sphere-like, that is, there exists an open neighbourhood $U_{ij}$ of $\Gamma_{ij}$, an open neighbourhood $V_{ij}$ of $S^{d-1}$, and a $C^2$ diffeomorphism $\psi_{ij} : U_{ij} \to V_{ij}$ such that $\psi_{ij}\left(\Gamma_{ij}\right) = S^{d-1}$. Moreover, there exists $d_0 > 0$ such that
\[
(2.1) \quad \forall (i, j) \in \{1, \dots, N+1\}^2, \ i \neq j, \ \text{if } \partial\Omega_i \setminus \Gamma_{ij} \neq \emptyset \text{ then } d\left(\Gamma_{ij}, \partial\Omega_i \setminus \Gamma_{ij}\right) > d_0.
\]
An example of such a configuration is given in figure 3.1. Following the usual notation, given a set $U$ we write
\[
\mathbf{1}_U : x \mapsto \begin{cases} 1 & \text{if } x \in U, \\ 0 & \text{otherwise.} \end{cases}
\]

Assumption 2.
Given $\alpha \in (0,1]$, for each $i \in \{1, \dots, N+1\}$ let $A_i \in C^{0,\alpha}\left(\mathbb{R}^d; M^s_d(\mathbb{R})\right)$ be a symmetric-matrix-valued function which is uniformly elliptic, that is, there exists $\lambda > 0$ such that for all $x \in \Omega$, and all $\zeta \in \mathbb{R}^d$,
\[
(2.2) \quad \lambda |\zeta|^2 < A(x)\zeta \cdot \zeta.
\]
For each $i \in \{1, \dots, N+1\}$, let $b_i \in C^{0,\alpha}\left(\mathbb{R}^d; \mathbb{R}^d\right)$, $c_i \in C^{0,\alpha}\left(\mathbb{R}^d; \mathbb{R}^d\right)$ and $q_i \in C^{0,\alpha}\left(\mathbb{R}^d; \mathbb{R}\right)$ be such that
\[
\max\left( \|A_i\|_{C^{0,\alpha}(\mathbb{R}^d;\mathbb{R}^{d\times d})}, \|b_i\|_{C^{0,\alpha}(\mathbb{R}^d;\mathbb{R}^d)}, \|c_i\|_{C^{0,\alpha}(\mathbb{R}^d;\mathbb{R}^d)}, \|q_i\|_{C^{0,\alpha}(\mathbb{R}^d;\mathbb{R})} \right) \leq \lambda^{-1},
\]
where for $n \geq 1$,
\[
\|f\|_{C^{0,\alpha}(\mathbb{R}^d;\mathbb{R}^n)} = \sup_{\mathbb{R}^d} |f| + \sup_{\substack{x \neq y \in \mathbb{R}^d \\ 0 \neq \zeta \in \mathbb{R}^n}} \frac{|f(x) \cdot \zeta - f(y) \cdot \zeta|}{|x - y|^\alpha \, |\zeta|}.
\]
Finally, when $d \geq 3$, we assume additionally that $A_i \in C^{0,1}\left(\mathbb{R}^d; M^s_d(\mathbb{R})\right)$.¹ We write, for all $x \in \Omega \setminus \cup_{i,j}\Gamma_{ij}$,
\[
(2.3) \quad A = \sum_{i=1}^{N+1} A_i \mathbf{1}_{\Omega_i}, \quad b = \sum_{i=1}^{N+1} b_i \mathbf{1}_{\Omega_i}, \quad c = \sum_{i=1}^{N+1} c_i \mathbf{1}_{\Omega_i}, \quad \text{and} \quad q = \sum_{i=1}^{N+1} q_i \mathbf{1}_{\Omega_i}.
\]
We consider a second order elliptic operator of the form $L : u \mapsto -\operatorname{div}(ADu + bu) + c \cdot Du + qu$, and the PDE under consideration is
\[
(2.4) \quad Lu = 0 \ \text{in } \Omega.
\]
¹So that the Unique Continuation Property holds in each subdomain. This assumption can be relaxed when $\left(A_i^{k\ell}(x)\right)_{1 \leq k,\ell \leq d} = \left(a_i(x)\delta_{k\ell}\right)_{1 \leq k,\ell \leq d}$ for all $x$.

YVES CAPDEBOSCQ AND TIANRUI DAI

Thanks to assumption 2, the weak solutions of equation (2.4) enjoy additional regularity within each subdomain $\Omega_i$, $i = 1, \dots, N+1$. Lemma 3 follows from classical regularity results, see e.g. [Gia93, Theorems 5.19 and 5.20] for a modern exposition.

Lemma 3. If $u \in H^1(\Omega)$ is a weak solution of equation (2.4), such that there exists $g \in C^{1,\alpha}\left(\overline{\Omega}\right)$ such that $u - g \in H^1_0(\Omega)$ and for any $v \in H^1_0(\Omega)$ there holds
\[
\int_\Omega ADu \cdot Dv \, dx + \int_\Omega u\, b \cdot Dv \, dx + \int_\Omega v\, c \cdot Du \, dx + \int_\Omega quv \, dx = 0.
\]
Then,
\[
(2.5) \quad u \in H^1(\Omega) \cap \left( \cup_{i=1}^{N+1} C^{1,\alpha}(\Omega_i) \right) =: H(\Omega),
\]
and
\[
\sum_{i=1}^{N+1} \left( \|Du\|_{C^{0,\alpha}(\Omega_i)} + \|u\|_{C^{0,\alpha}(\Omega_i)} \right) \leq C \left( \|u\|_{L^2(\Omega)} + \|g\|_{C^{1,\alpha}(\overline{\Omega})} \right),
\]
where the constant $C$ depends on $\lambda$ given in assumption 2 and on $\Omega_i$, $i \in \{1, \dots, N+1\}$, only.

We are now in position to define the quantity of interest in this paper.

Definition 4 (Non-vanishing Jacobian solutions). Given $P \geq d+1$, we call $\{u_i^{x_0}\}_{i=1}^P \in H(\Omega)^P$ (a group of) non-vanishing Jacobian solutions of equation (2.4) at $x_0 \in \Omega \setminus \cup_{i,j}\Gamma_{ij}$, if
(1) for $i = 1, \dots, P$ there holds $Lu_i^{x_0} = 0$ in $\Omega$,
(2) the solutions $\{u_i^{x_0}\}_{i=1}^P \in H(\Omega)^P$ satisfy $\operatorname{rank}\left(J\left(u_1^{x_0}, \cdots, u_P^{x_0}\right)(x_0)\right) = d+1$, where
\[
J(u_1, \dots, u_P)(x) := \begin{pmatrix} Du_1 & u_1 \\ \vdots & \vdots \\ Du_P & u_P \end{pmatrix}(x) = \begin{pmatrix} \partial_1 u_1 & \cdots & \partial_d u_1 & u_1 \\ \vdots & & \vdots & \vdots \\ \partial_1 u_P & \cdots & \partial_d u_P & u_P \end{pmatrix}(x).
\]

Remark 5. Thanks to lemma 3, pointwise values of $J\left(u_1^{x_0}, \cdots, u_P^{x_0}\right)$ are well defined at any $x \in \Omega \setminus \cup_{i,j}\Gamma_{ij}$. The use of the word 'Jacobian' for the quantity $J$ may seem abusive. Indeed one would expect a Jacobian to be
\[
\begin{pmatrix} Dv_1 \\ \vdots \\ Dv_d \end{pmatrix} = \begin{pmatrix} \partial_1 v_1 & \cdots & \partial_d v_1 \\ \vdots & \ddots & \vdots \\ \partial_1 v_d & \cdots & \partial_d v_d \end{pmatrix},
\]
for some functions $v_1, \cdots, v_d$.
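As a purely illustrative special case of definition 4 (our own toy example, not taken from the paper): for the Laplacian $L = -\Delta$, the $d+1$ harmonic functions $u_i = x_i$, $i = 1, \dots, d$, and $u_{d+1} = 1$ are non-vanishing Jacobian solutions at every point, since row $i$ of $J$ is $(Du_i, u_i)$ and $\det J = 1$ independently of $x$. A short numerical check:

```python
import numpy as np

# Toy special case (not from the paper): L = -Laplacian, with solutions
# u_i = x_i for i = 1..d and u_{d+1} = 1. Row i of J is (Du_i, u_i).
d = 3
x = np.array([0.7, -0.2, 1.5])            # arbitrary evaluation point x0

J = np.zeros((d + 1, d + 1))
J[:d, :d] = np.eye(d)                     # Du_i = e_i for u_i = x_i
J[:d, d] = x                              # last column holds the values u_i(x)
J[d, d] = 1.0                             # u_{d+1} = 1 has Du = 0

assert np.linalg.matrix_rank(J) == d + 1  # the rank condition of definition 4
print(np.linalg.det(J))                   # 1.0 at every x
```

The matrix is block upper triangular, so the determinant is $1$ regardless of the evaluation point, which is the simplest instance of the rank condition holding on the whole domain.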
It turns out that the slightly generalised Jacobian we consider is a natural quantity for this problem, taking into account the behaviour of solutions across interfaces. On the other hand, from a family of non-vanishing Jacobian solutions, one can extract a subfamily $\left( u_{i_1}^{x_0}, \cdots, u_{i_d}^{x_0} \right)$ such that $\det\left( Du_{i_1}^{x_0}, \cdots, Du_{i_d}^{x_0} \right)(x_0) \neq 0$, so it encompasses non-vanishing Jacobian constraints for the traditional definition of a Jacobian. Following the strategy introduced in [AC22] we define the admissible set, for an integer $P$,
\[
A(P) := \left\{ (u_1, u_2, \cdots, u_P) \in H(\Omega)^P : \forall x \in \Omega \setminus \cup_{i,j}\Gamma_{ij}, \ (u_1, u_2, \cdots, u_P) \text{ are non-vanishing Jacobian solutions} \right\}.
\]
For a geometrical reason that will be discussed later, we introduce the notation
\[
d^\star = \begin{cases} d & \text{when } d = 2, 4, 8, \\ d + 1 & \text{otherwise when } d \geq 3. \end{cases}
\]

2.2. Main result.
The main result of this article is the following.

Theorem 6. Under assumption 1 and assumption 2, when $P \geq \left\lceil \frac{d + d^\star + 1}{\alpha} \right\rceil$, $A(P)$ is an open and dense subset of $H(\Omega)^P$, where $H(\Omega)$ is defined in equation (2.5).

Remark. This theorem is an extension of [AC22, Theorem 2.3] to the piecewise regular context. In terms of the result itself, the number $P$ obtained in [AC22] is $\left\lceil \frac{2d}{\alpha} \right\rceil$; thus our result requires a slightly larger number than the regular case; however, the number of subdomains where the coefficients are regular does not play a role. A careful reader comparing [AC22, Theorem 2.3] and theorem 6 might notice that our result applies to the whole domain, instead of a compact subset. Another simplification is that we need not assume that the Dirichlet (or Neumann, or Robin) boundary value problem associated to (2.4) is well posed for our result to hold.

The proof is done in several steps.

Theorem 7. For any $\sigma > 0$ there exists $\varepsilon > 0$ such that for any $x \in \Omega \setminus \cup_{i,j}\Gamma_{ij}$, there exist $d+1$ solutions denoted $u_1^x, u_2^x, \cdots, u_{d+1}^x$ such that $u_i^x \in H^1(\Omega)$ and $Lu_i^x = 0$ in $\Omega$ for $i \in \{1, 2, \cdots, d+1\}$, and there holds
\[
(2.6) \quad \det J\left(u_1^x, u_2^x, \cdots, u_{d+1}^x\right)(y) = \det \begin{pmatrix} \partial_1 u_1^x & \cdots & \partial_d u_1^x & u_1^x \\ \vdots & & \vdots & \vdots \\ \partial_1 u_{d+1}^x & \cdots & \partial_d u_{d+1}^x & u_{d+1}^x \end{pmatrix}(y) > \sigma
\]
for any $y \in B(x, \varepsilon) \cap \Omega_j$, $j \in \{1, \dots, N+1\}$.

This result is proved in section §3. It does not follow directly from classical unique continuation arguments, because of the discontinuous nature of the coefficients of equation (2.4). Choose $\sigma = 1$, and let $\varepsilon$ be the corresponding ball radius.
We may extract a finite cover of $\Omega$ from $\cup_{x \in \Omega \setminus \cup_{i,j}\Gamma_{ij}} B(x, \varepsilon)$, of cardinality smaller than, say, $\left(\varepsilon^{-1}\operatorname{diam}(\Omega)\right)^d + 1$. As a result,
\[
(2.7) \quad A\left( \left\lceil \left( \frac{\operatorname{diam}(\Omega)}{\varepsilon} \right)^d \right\rceil + 1 \right) \neq \emptyset.
\]
To reduce the cardinality of the required group of non-vanishing Jacobian solutions, and to prove the density property we announced, we use a Whitney reduction lemma. This strategy was used in [AC22], based on a method introduced in [GW75], and used the Hölder continuity of the Jacobian map $J$. In our setting, $J$ may be discontinuous across the interfaces $\Gamma_{ij}$. On the other hand, because of the divergence form of the principal part of the elliptic operator $L$, a mixed-type (for lack of a better word) Jacobian map of the form
\[
\left( A\nabla u \cdot h_1 + b \cdot h_1 u, \ \nabla u \cdot h_2, \ \cdots, \ \nabla u \cdot h_d, \ u \right),
\]
with appropriately chosen $(h_1, \cdots, h_d) \in C^{0,1}\left(\overline{\Omega}; \mathbb{R}^{d\times d}\right)$, is continuous.
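The flux continuity that motivates this mixed-type map can be seen in a one-dimensional toy computation (our own illustration, not from the paper): for $-(au')' = 0$ with a piecewise constant coefficient $a$, the gradient $u'$ jumps across the interface while the flux $au'$ does not, which is why the first component pairs the solution with $ADu + bu$ rather than $Du$.

```python
# 1D toy example (ours): -(a u')' = 0 on (0,1), u(0) = 0, u(1) = 1,
# with a = a1 on (0, 1/2) and a = a2 on (1/2, 1).
a1, a2 = 1.0, 5.0

# The flux C = a u' is constant on (0,1); the boundary conditions give
# C * (0.5/a1 + 0.5/a2) = u(1) - u(0) = 1.
C = 1.0 / (0.5 / a1 + 0.5 / a2)
grad_left, grad_right = C / a1, C / a2    # u' on either side of x = 1/2

print(grad_left, grad_right)              # the gradient jumps: 5/3 vs 1/3
print(a1 * grad_left, a2 * grad_right)    # the flux agrees on both sides
```

Only the normal component of $A\nabla u$ survives this kind of interface crossing, which mirrors the role of $f_1$ in the construction below.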
Proposition 8. There exists a family of vector-valued functions $F = \left( f_1, \cdots, f_{d^\star} \right) \in C^{0,1}\left(\overline{\Omega}; \mathbb{R}^d\right)^{d^\star}$, such that
(1) for every $x \in \overline{\Omega}$, there holds $\operatorname{rank}\left(f_1, \cdots, f_{d^\star}\right)(x) = d$;
(2) on each $\Gamma_{ij}$, $|f_1| = 1$, $f_1$ is normal to $\Gamma_{ij}$, and $f_1 \cdot f_j = 0$ for any $j \geq 2$;
(3) for any $u \in H(\Omega)$ weak solution of equation (2.4), the map
\[
(2.8) \quad J_f(u, F) := \left( (ADu + bu) \cdot f_1, \ Du \cdot f_2, \ \cdots, \ Du \cdot f_{d^\star}, \ u \right)
\]
satisfies $J_f(u, F) \in C^{0,\alpha}\left(\overline{\Omega}; \mathbb{R}^{d^\star+1}\right)$.

This proposition is proved in section §4.

Remark.
The vector $f_1$ can be thought of as the extension of the normal vector, and $f_2, \cdots, f_{d^\star}$ as the tangent vectors, on each boundary $\Gamma_{ij}$. Indeed, since $A$ and $b$ are only piecewise regular, only the normal flux is continuous (and, in turn, Hölder continuous) across the interfaces between any $\Omega_i$ and $\Omega_j$. This forces $f_2, \cdots, f_{d^\star}$ to be tangent to the interface. A topological difficulty arises in all dimensions except 2, 4 and 8, which requires the introduction of an extra element to obtain a full rank family of Lipschitz continuous tangent vectors. This classical result [BM58, Ker58] is discussed further in section §B. To untangle the dependence of $J_f$ on $u$ and $F$, we reformulate $J_f$ as follows.

Proposition 9. We denote by $P_{d,d+1} \in \mathbb{R}^{d\times(d+1)}$ the projection from $\mathbb{R}^{d+1}$ to $\mathbb{R}^d$ given by $\left(P_{d,d+1}\right)_{ij} = \delta_{ij}$.
We denote by Ed+1,d ∈ R^{(d+1)×d} the extension from R^d to R^{d+1} given by (Ed+1,d)ij = δij. Set

T : (Ω \ ∪ij Γij) × (R^{d+1})^{d⋆+1} → R^{(d+1)×(d⋆+1)}
    (x, ζ1, …, ζd⋆+1) ↦ [ A^T(x) Pd,d+1 ζ1   Pd,d+1 ζ2   ⋯   Pd,d+1 ζd⋆   Pd,d+1 ζd⋆+1 ]
                        [ b(x) · Pd,d+1 ζ1        0       ⋯        0             1       ]

For any x ∈ Ω \ ∪ij Γij, and for any (ξ1, …, ξd⋆) ∈ (R^d)^{d⋆}, there holds

(2.9)  rank T(x, Ed+1,d ξ1, …, Ed+1,d ξd⋆, ed+1) = rank(ξ1, …, ξd⋆) + 1.

Furthermore, we have

Jf(u, F) = (∂1u, …, ∂du, u) T(x, Ed+1,d f1, …, Ed+1,d fd⋆, ed+1),

where Jf is given by equation (2.8).

Proof. The last column of T(x, Ed+1,d ξ1, …, Ed+1,d ξd⋆, ed+1) is ed+1 ≠ 0. Together with the fact that Pd,d+1 Ed+1,d = Id, the identity matrix in R^d, the first d⋆ columns are

[ A^T(x) ξ1   ξ2   ⋯   ξd⋆ ]
[ b(x) · ξ1    0   ⋯    0  ]

Thanks to the uniform ellipticity of A, A^T ξ1 · ξ1 > λ|ξ1|², and equation (2.
9) follows. The identity involving Jf is straightforward. □

The Whitney reduction argument is as follows.

Lemma 10. Given P ∈ N large enough so that A(P) ≠ ∅, define

(2.10)  F : (Ω \ ∪i,j Γij) × R^{d⋆+1} → R^P,  (x, ζ) ↦ Fx ζ,

where

Fx ζ := [ (∂1u1, …, ∂du1, u1) ]
        [          ⋮          ] T(x, Ed+1,d f1, …, Ed+1,d fd⋆, ed+1) ζ,
        [ (∂1uP, …, ∂duP, uP) ]

with {u1, …, uP} ∈ A(P). Then Fx has rank d + 1.
For P > (d + d⋆ + 1)/α, and a ∈ R^{P−1}, let Pa be the map from R^P to R^{P−1} defined by

Pa(y) = (y1 − a1 yP, …, yP−1 − aP−1 yP)  for y = (y1, y2, …, yP) ∈ R^P.
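As a numerical aside (not part of the argument), the effect of such a projection Pa on a full-rank map can be checked on synthetic data; the matrix M below is a stand-in for Fx, an assumption of this sketch, and shows that a generic choice of a preserves the rank d + 1:

```python
import numpy as np

rng = np.random.default_rng(0)
d, P = 3, 10                    # illustrative dimensions; M plays the role of F_x

# Synthetic stand-in for F_x: a P x (d+1) matrix of full column rank d + 1
M = rng.standard_normal((P, d + 1))

def project(a, M):
    """Apply P_a(y) = (y_1 - a_1 y_P, ..., y_{P-1} - a_{P-1} y_P) to each column of M."""
    return M[:-1, :] - np.outer(a, M[-1, :])

a = rng.standard_normal(P - 1)  # a generic choice of a
PaM = project(a, M)
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(PaM))  # 4 4, i.e. rank d + 1 is preserved
```

A random Gaussian a avoids the exceptional (Lebesgue-null) set of directions for which the rank would drop, which is the phenomenon quantified next.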
Let G = { a ∈ R^{P−1} : Pa ∘ Fx has rank d + 1 }; then |R^{P−1} \ G|_{Lebesgue} = 0.

The proof of this lemma is given in section A.3. We then translate this reduction result for Jf into its counterpart for our original target map J.

JACOBIAN CONSTRAINTS WITH DISCONTINUOUS COEFFICIENTS

Lemma 11. Given any P > (d + d⋆ + 1)/α, and any {u1, …, uP} ∈ A(P), let G be the set of a = (a1, …, aP−1) ∈ R^{P−1} such that for all x ∈ Ω \ ∪ij Γij there holds

rank J(u1 − a1 uP, …, uP−1 − aP−1 uP)(x) = d + 1.

Then |R^{P−1} \ G|_{Lebesgue} = 0.

Proof.
Given F = (f1, …, fd⋆) ∈ (C^{0,1}(Ω̄; R^d))^{d⋆} as defined in Proposition 8, for x ∈ Ω \ ∪ij Γij, let Fx : R^{d⋆+1} → R^P be as given in equation (2.10). Thanks to Lemma 10, we have rank Fx = d + 1 and, for a.e. a ∈ R^{P−1}, Pa ∘ Fx has rank d + 1, which means

rank [ Jf(u1 − a1 uP, F) ]
     [          ⋮         ] (x) = d + 1.
     [ Jf(uP−1 − aP−1 uP, F) ]

Denote J = J(u1 − a1 uP, …, uP−1 − aP−1 uP) and T = T(x, Ed+1,d f1, …, Ed+1,d fd⋆, ed+1), so that

[ Jf(u1 − a1 uP, F) ]
[          ⋮         ] = J T.
[ Jf(uP−1 − aP−1 uP, F) ]
Then rank(J T) = d + 1, and since rank(J T) ≤ min(rank J, rank T), we conclude that d + 1 ≥ rank J ≥ d + 1, which proves that rank J = d + 1. □

With the above lemma, we have now returned to a familiar setting, where no further complications due to the discontinuous nature of the coefficients arise. The rest of the proof of Theorem 6 now follows an argument similar to the one found in [AC22, Theorem 2.3], together with a variant of the argument above proving that the set A(P) is open, which we include in section §C.

2.3. Application to an example. We revisit Example 1, namely the reconstruction of the conductivity from the knowledge of the solution, to illustrate how our result naturally extends existing results derived for uniformly regular parameters.
In addition to Assumption 1 and Assumption 2, suppose that b = c = q = 0 and A = γ Id, where γ is a scalar-valued function, and that α = 1.

Proposition. Given P > 0 such that A(P) ≠ ∅, and {u1, …, uP} ∈ A(P). For each ℓ ∈ {1, …, P}, uℓ ∈ BV(Ω), and its singular part is a jump set. The union over ℓ of these jump sets is ∪i,j Γij. Given x ∈ Γij, let n(x) be the normal pointing from Ωi to Ωj, that is, x + t n(x) ∈ Ωi for t < 0 and x + t n(x) ∈ Ωj for t > 0, provided t is small enough. Let up be such that

lim_{t→0+} |Dup(x + tn)| = max_{k∈{1,…,P}} lim_{t→0+} |Duk(x + tn)|.

Then

lim_{t→0+} ( ln|Dup(x + t n(x)) · n(x)| − ln|Dup(x − t n(x)) · n(x)| ) = −[ln γ(x)]ij,

where

[ln γ(x)]ij = lim_{h→x, h∈Ωj} ln γ(h) − lim_{h→x, h∈Ωi} ln γ(h).
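In dimension one this jump relation can be checked by hand: the flux γu′ is constant across the interface, so the jump of ln|u′| is exactly minus the jump of ln γ. The sketch below uses illustrative values for γ on either side of a single interface and is only a sanity check of the identity, not part of the argument:

```python
import numpy as np

# Piecewise-constant conductivity on (-1, 1) with one interface at x = 0
gamma_i, gamma_j = 1.0, 5.0       # illustrative values on Omega_i (x < 0) and Omega_j (x > 0)

# For -(gamma u')' = 0 with u(-1) = 0, u(1) = 1, the flux gamma * u' is a constant c,
# so u'(x) = c / gamma(x) on each side of the interface.
c = 1.0 / (1.0 / gamma_i + 1.0 / gamma_j)   # fixed by the boundary data; its value cancels below
du_minus = c / gamma_i            # lim_{t -> 0+} u'(0 - t)
du_plus = c / gamma_j             # lim_{t -> 0+} u'(0 + t)

jump_log_du = np.log(abs(du_plus)) - np.log(abs(du_minus))
jump_log_gamma = np.log(gamma_j) - np.log(gamma_i)    # [ln gamma]_{ij}
print(np.isclose(jump_log_du, -jump_log_gamma))       # True
```

The constant c cancels in the difference of logarithms, which is why the boundary data plays no role in the recovered jump.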
The absolutely continuous part of D ln γ with respect to the Lebesgue measure is determined by

[ Du1 ]            [ Δu1 ]
[  ⋮  ] D ln γ  =  [  ⋮  ]   on Ωk, k ∈ {1, …, N + 1}.
[ DuP ]            [ ΔuP ]

Remark. In particular, γ is uniquely determined up to a multiplicative constant.

Proof. Thanks to Lemma 3 and Proposition 8, there holds

Jf(uℓ, F) = (γ Duℓ · f1, Duℓ · f2, …, Duℓ · fd⋆, uℓ) ∈ C^{0,1}(Ω̄).

Because rank F = d, the discontinuities of Duℓ are included in the Γij.
For any given ℓ, the set of discontinuities may not correspond exactly to the entire ∪i,j Γij, since Duℓ · f1 may possibly vanish on these interfaces; however, rank [Jf(u1, F), …, Jf(uP, F)](x) = d + 1 for all x ∈ Ω̄, thus in particular rank [(γDu1 · f1, …, γDuP · f1)](x) = 1 on ∪i,j Γij, and the set is indeed the whole ∪i,j Γij. Equipped with all interfaces Γij, and a set of associated normal vectors, we may recover the jumps between the different regions. Thanks to Proposition 8, γDuℓ · f1 ∈ C^{0,1}(Ω̄), and f1(x) = n(x) on each Γij. Since lim_{t→0+} |Dup(x + tn)| = max_{k∈{1,…,P}} lim_{t→0+} |Duk(x + tn)|, and not all such limits can be zero since (u1, …, uP) ∈ A(P), we have lim_{t→0} ln|(γDup)(x + tn) · n| ∈ R. In particular,

lim_{t→0+} ( ln|(γDup)(x + tn) · n| − ln|(γDup)(x − tn) · n| ) = 0,

and therefore

lim_{t→0+} ( ln|Dup(x + tn) · n| − ln|Dup(x − tn) · n| ) = −[ln γ(x)]ij.

The final identity is obtained exactly as in the regular case.
□

3. Proof of Theorem 7

We construct a group of solutions which satisfies the Jacobian constraint locally within one subdomain Ωi and extend them one subdomain at a time. To do this rigorously, we introduce a construction map, and an associated index map, defining the order in which the extension is performed. For any permutation i : {1, …, N + 1} → {1, …, N + 1}, we denote ΩIk = Ωi(1) ∪ ⋯ ∪ Ωi(k) for k ∈ {1, …, N + 1}. We have the following definition:

Definition 12. We say a permutation i : {1, …, N + 1} → {1, …, N + 1} is a construction map if the following holds: for any j ∈ {2, …, N + 1}, there exists a unique k(j) ∈ {1, …, j − 1} such that ∂Ωi(j) ∩ ∂ΩIj−1 = Γi(j)i(k(j)).
With i a construction map comes j^i = (j^i_1, …, j^i_{N+1}), the index map of i, defined as follows:
(1) for every s ∈ {1, …, N + 1}, j^i_s ∈ {1, …, N + 1}^{N+1};
(2) the starting map j^i_1 satisfies j^i_1 = (i(1), …, i(1));
(3) for any s ∈ {2, …, N + 1}, we have (j^i_s)_{i(ℓ)} = (j^i_{s−1})_{i(ℓ)} if ℓ ≤ s − 1, (j^i_s)_{i(ℓ)} = i(ℓ) if ℓ = s, and, for ℓ ≥ s + 1, (j^i_s)_{i(ℓ)} is defined inductively by (j^i_s)_{i(ℓ)} = (j^i_s)_{i(k(ℓ))}.

Thanks to Assumption 1, for any i ∈ {1, …, N + 1} we can always find a construction map i with i(1) = i. A simple example is the following:

Example 13. Let Ω = Ω1 ∪ Ω2 ∪ Ω3 ∪ Ω4 ∪ Ω5 as in Figure 3.1. Then i1 : {1, 2, 3, 4, 5} → {2, 3, 1, 5, 4} and i2 : {1, 2, 3, 4, 5} → {2, 1, 5, 4, 3} are two different construction maps with i1(1) = i2(1) = 2. We have
j^{i1} = { (2, 2, 2, 2, 2), (2, 2, 3, 2, 2), (1, 2, 3, 1, 1), (1, 2, 3, 5, 5), (1, 2, 3, 4, 5) }

and

j^{i2} = { (2, 2, 2, 2, 2), (1, 2, 2, 1, 1), (1, 2, 2, 5, 5), (1, 2, 2, 4, 5), (1, 2, 3, 4, 5) }.

Remark. Note that for any construction map i, there holds j^i_{N+1} = (1, …, N + 1).

Figure 3.1. A 4-inclusion configuration.

In the sequel, it will be convenient to assume that the Dirichlet boundary value problem associated to L is well posed in Ω, as this allows us to control the norm of solutions by their boundary traces. In fact, well-posedness for a large family of sub-problems will be used.
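Returning to Definition 12, the index-map recursion is entirely mechanical once the adjacency data k(·) is fixed. The following sketch reproduces the table j^{i1} above; the k-values used are our reading of Figure 3.1 (they are not stated explicitly in the text, so treat them as an assumption of the sketch):

```python
# Sketch of the index-map recursion from Definition 12. k maps j to k(j), the
# unique earlier index with interface Gamma_{i(j) i(k(j))}; these values are
# inferred from Figure 3.1, not given in the text.
def index_maps(i, k, N):
    """i: construction map as a 1-based list; k: dict j -> k(j); returns j^i_1, ..., j^i_{N+1}."""
    js = [[i[0]] * (N + 1)]                          # j^i_1 = (i(1), ..., i(1))
    for s in range(2, N + 2):
        prev, cur = js[-1], [0] * (N + 1)
        for ell in range(1, s):                      # ell <= s - 1: copy the previous entry
            cur[i[ell - 1] - 1] = prev[i[ell - 1] - 1]
        cur[i[s - 1] - 1] = i[s - 1]                 # ell = s: set the new entry to i(s)
        for ell in range(s + 1, N + 2):              # ell >= s + 1: inherit from position i(k(ell))
            cur[i[ell - 1] - 1] = cur[i[k[ell] - 1] - 1]
        js.append(cur)
    return js

i1, k1 = [2, 3, 1, 5, 4], {2: 1, 3: 1, 4: 3, 5: 4}   # assumed adjacency for i1
print(index_maps(i1, k1, 4))
# [[2, 2, 2, 2, 2], [2, 2, 3, 2, 2], [1, 2, 3, 1, 1], [1, 2, 3, 5, 5], [1, 2, 3, 4, 5]]
```

With i2 = [2, 1, 5, 4, 3] and the assumed adjacency k2 = {2: 1, 3: 2, 4: 3, 5: 1}, the same routine reproduces j^{i2}.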
We denote by L[i1, i2, …, iN+1], for i1, …, iN+1 ∈ {1, …, N + 1}, the second-order elliptic operator with given coefficients A_{ij}, b_{ij}, c_{ij}, q_{ij} in Ωj. We shall use the following lemma.

Lemma 14. There exists some ϑ > 0 such that for any κ ∈ (0, ϑ), all Dirichlet boundary value problems associated with L[i1, …, iN+1] + κ, where i1, …, iN+1 ∈ {1, …, N + 1}, are well posed in Ω.

The proof of this lemma is given in section A.1. For any fixed κ ∈ (0, ϑ), we first prove Theorem 7 for L + κ. To simplify notations, we write L for L + κ. Thanks to Lemma 14, the Dirichlet boundary value problem associated with L[i1, …, iN+1] is well posed for any i1, …, iN+1 ∈ {1, …, N + 1}.
In the last step, we shall revert to the original operator, now L − κ, and prove Theorem 7 using the smallness of κ and the regularity of the coefficients. The proof of Theorem 7 relies on a series of lemmas. To start the construction, we exhibit functions satisfying requirement (2.6) which satisfy L u^x_i = 0 in a neighbourhood of x.

Lemma 15. Given j ∈ {1, …, N + 1}, for any σ ∈ (0, 1) there exists ε ∈ (0, 1), depending only on λ, σ, d and the κ given by Lemma 14, such that for any point x ∈ Ω \ ∪i,j Γij there exist u^x_1, u^x_2, …, u^x_{d+1} ∈ (H¹(Ω))^{d+1} such that for i ∈ {1, 2, …, d + 1}, L[j, …, j] u^x_i = 0 in Ω. Moreover, there holds det J(u^x_1, u^x_2, …, u^x_{d+1})(y) > σ for any y ∈ B(x, ε) ∩ Ω.

Proof. Fix j = 1.
Consider x = 0 ∈ Ω, and Bx = B(0, 2 diam Ω) a ball centred at x containing Ω. In the sequel, C represents any constant depending only on d, the λ given in Assumption 2, and the κ given by Lemma 14. Note that the coefficients of L[1, …, 1], namely A1, b1, c1, q1 + κ, are Hölder continuous on Bx. Consider the constant-coefficient partial differential operator

(3.1)  L0 : v ↦ −div(A1(0) Dv + b1(0) v) + c1(0) · Dv + (q1(0) + κ) v.

For i = 1, …, d, let ui = f(xi), where f is the solution of the constant-coefficient ODE

−(A1)ii(0) f″(t) + (c^i_1(0) − b^i_1(0)) f′(t) + (q1(0) + κ) f(t) = 0 for all t ∈ R,   f′(0) = 1, f(0) = 0.

Let ud+1 = f(x1), where f is the solution of the following second-order constant-coefficient ODE initial value problem:

−(A1)11(0) f″(t) + (c^1_1(0) − b^1_1(0)) f′(t) + (q1(0) + κ) f(t) = 0 for all t ∈ R,   f′(0) = 0, f(0) = 1.
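For illustration, with the assumed sample values (A1)ii(0) = 1, c^i_1(0) − b^i_1(0) = 0 and q1(0) + κ = 1 (chosen for this sketch only), the first ODE reads −f″ + f = 0 with f(0) = 0, f′(0) = 1, so f = sinh; and the Jacobian determinant at the origin depends only on the initial conditions:

```python
import numpy as np

# Sample ODE: -f'' + f = 0, f(0) = 0, f'(0) = 1, solved by f = sinh
t = np.linspace(-1.0, 1.0, 201)
f = np.sinh(t)
fpp = np.gradient(np.gradient(f, t), t)        # numerical second derivative
print(np.max(np.abs(-fpp + f)[5:-5]) < 1e-2)   # ODE residual is small away from the endpoints: True

# det J(u_1, ..., u_{d+1})(0) = 1 follows from the initial conditions alone:
d = 3
J0 = np.zeros((d + 1, d + 1))
for i in range(d):
    J0[i, i] = 1.0        # row i: (Du_i, u_i)(0) = (e_i, 0), since f'(0) = 1, f(0) = 0
J0[d, d] = 1.0            # row d+1: (Du_{d+1}, u_{d+1})(0) = (0, 1), since f'(0) = 0, f(0) = 1
print(np.isclose(np.linalg.det(J0), 1.0))      # True
```

The matrix J0 is simply the identity, which is the point of the chosen initial conditions.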
We observe that, for all i ∈ {1, …, d + 1}, L0 ui = 0 in Ω, and det J(u1, …, ud+1)(0) = 1. We now turn to solutions of the boundary value problem with variable coefficients. Set Vε = B(0, 2ε) ⊂ Bx for some ε ∈ (0, min(1/2, diam Ω / 2)) to be chosen later. We shall construct u^x_1, …, u^x_{d+1} in H¹_loc(Bx), the required construction being obtained by taking the restriction to Ω. Consider the d + 1 Dirichlet problems

L[1, …, 1] vj = 0 in Vε,   vj = uj on ∂Vε,   j = 1, …, d + 1.

Note that this problem is well posed for ε small enough. Thanks to Lemma 3, v1, v2, …, vd+1 are well defined and in C^{1,α}(V̄ε). Set, for i = 1, …, d + 1,

δ^1_i := −(A1(x) − A1(0)) Dui − (b1(x) − b1(0)) ui,  and  δ^2_i := (c1(x) − c1(0)) · Dui + (q1(x) − q1(0)) ui.
Then, for each i, L[1, …, 1](ui − vi) = div(δ^1_i) + δ^2_i, and

‖Dui − Dvi‖_{C^{0,α}(V̄ε)} ≤ C ( ‖δ^1_i‖_{C^{0,α}(V̄ε)} + ‖δ^2_i‖_{C^{0,α}(V̄ε)} ).

In particular,

(3.2)  ‖Dui − Dvi‖_{C^{0,α/2}(V̄ε)} ≤ C ( ‖δ^1_i‖_{C^{0,α}(V̄ε)} + ‖δ^2_i‖_{C^{0,α}(V̄ε)} ) diam(Vε)^{α/2} ≤ C ε^{α/2}.

By an integration by parts, and Poincaré's inequality (with a constant chosen to be valid for any ε ∈ (0, 1)),

‖Dui − Dvi‖_{L²(Vε)} ≤ C ( ‖δ^1_i‖_{L²(Vε)} + C_{Poincaré} ‖δ^2_i‖_{L²(Vε)} ).

We compute, using the Hölder regularity of the parameters,

∫_{Vε} (δ^1_i)² dx + ∫_{Vε} (δ^2_i)² dx ≤ C ε^{d+2α}.

Inserting this estimate in (3.2), we obtain ‖ui − vi‖_{H¹₀(Vε)} ≤ C ε^{d/2+α}, and using that for any f there holds

‖f‖_∞ ≤ ‖f‖_{C^{0,α/2}(V̄ε)} diam(Vε)^{α/2} + |Vε|^{−1/2} ‖f‖_{L²(Vε)},

we conclude that ‖Dui − Dvi‖_{L∞(Vε)} ≤ C ε^α. Because of Assumption 2, the operator L[1, …, 1] enjoys a unique continuation property on Bx.
Thus, for each i there exists u^x_i ∈ H¹(Bx) ∩ C^{1,α}_loc(Bx) such that L[1, …, 1] u^x_i = 0 on Bx, and ‖u^x_i − vi‖_{L²(Vε)} < ε. Thanks to Lemma 3 this implies ‖Du^x_i − Dvi‖_{L∞(Bε)} ≤ C ε^α, where Bε = B(0, ε), and in turn, ‖Dui − Du^x_i‖_{L∞(Bε)} ≤ C ε^α. Since det J is multilinear,

| det J(u1, …, ud+1) − det J(u^x_1, …, u^x_{d+1}) | ≤ (d + 1) ( Σ_{i=1}^{d+1} |Dui| + |Du^x_i| )^d max_{1≤i≤d+1} |Dui − Du^x_i|.

Therefore

sup_{Bε} | det J(u1, …, ud+1) − det J(u^x_1, …, u^x_{d+1}) | ≤ C ε^α.

Since det J(u1, u2, …, ud+1)(0) = 1, for any σ ∈ (0, 1) there exists ε, depending only on λ, d, σ and κ, such that

min_{Bε} det J(u^x_1, …, u^x_{d+1}) > σ. □

The following lemma extends a solution across an interface.

Lemma 16. Let i be a construction map as defined in Definition 12, and j^i the associated index map. Given k ∈
{1, …, N + 1}, write Γk = ∂ΩIk−1 ∩ ∂Ωi(k), and Lk = L[j^i_k]. Let

Wk = int ( ∪_{ℓ : (j^i_k)ℓ = (j^i_{k−1})ℓ} Ωℓ ).

In other words, Wk is the open set where the coefficients of Lk are almost everywhere the same as those of Lk−1. Suppose that u ∈ H¹(Wk) is a weak solution of Lk u = Lk−1 u = 0 in Wk. For any δ > 0, there exists an open set U, with Wk ∪ Γk ⊂ U ⊂ Ω, and v ∈ H¹(U) such that Lk v = 0 in U and ‖u − v‖_{H¹(Wk)} < δ.

Proof. Suppose that Γk = Γ1i(k), that is, the subdomain within ΩIk for which Γk is a connected component of its boundary is Ω1. Write i(k) = k, and ω1 = Ω1 ∩ {x : d(x, Γk) < d0}, where d0 is given by equation (2.1).
(2.1). Thanks to assumption 1, there exists a $C^2$ diffeomorphism $\psi_k : U_{1,k} \to V_{1,k}$, $\Gamma_k \mapsto \partial B_1$, where $U_{1,k}$ and $V_{1,k}$ are neighbourhoods of $\Gamma_k$ and $\partial B_1$. Take $\eta > 0$ small enough such that $\psi_k^{-1}(\partial B_{1-\eta}) \subset U_{1,k} \cap \omega_1$. Take $t \in \big(\tfrac12, 1\big)$ and set
$$U^t := \big\{\psi_k^{-1}x : tx \in B_1 \setminus \overline{B_{1-\eta}}\big\} = \big\{\psi_k^{-1}x : x \in B_{\frac1t} \setminus \overline{B_{\frac{1-\eta}{t}}}\big\} \subset U_{1,k}.$$
An example of such a construction is illustrated in figure 3.2. In what follows, $C$ denotes a constant, which may change from line to line, depending on $\Omega$, $\Omega_1$, $d$, $\lambda$, $\kappa$ and $\|\psi_k^{-1}\|_{C^2}$. Write $Y := U^t \cap \Omega_1$ and $G := U^t \cap \Omega_{i(k)}$. Define $u^t(x) = u\big(\psi_k^{-1}(t\psi_k(x))\big) \in H^1(U^t)$.
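In the flat model where $\psi_k$ is the identity, $u^t(x) = u(tx)$, and the Hölder estimate used below reduces to the defining inequality $|u(tx) - u(x)| \le [u]_{C^{0,\alpha}}\,(|1-t|\,|x|)^\alpha$. A minimal numeric sketch with the illustrative $C^{0,1/2}$ function $u(x) = \sqrt{|x|}$ (our choice of chart and of $u$, not the paper's):

```python
import math

def u(x):
    # u(x) = sqrt(|x|) is Hölder continuous with exponent 1/2 and seminorm 1
    return math.sqrt(abs(x))

def pullback(x, t):
    # flat-chart analogue of u^t(x) = u(psi_k^{-1}(t psi_k(x))) with psi_k = id
    return u(t * x)

# check |u^t(x) - u(x)| <= |tx - x|^(1/2) = ((1 - t) * |x|)^(1/2)
for t in [0.5, 0.9, 0.99, 0.999]:
    for x in [0.1, 1.0, 2.0, 5.0]:
        lhs = abs(pullback(x, t) - u(x))
        rhs = ((1 - t) * x) ** 0.5
        assert lhs <= rhs + 1e-12
```

The observed difference degenerates at rate $|1-t|^{1/2}$, matching the exponent $\alpha = \tfrac12$ in the estimates that follow.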
There exists some $\eta_0 > 0$ such that for any $0 < \eta < \eta_0$ and any $t \in \big(\tfrac12, 1\big)$ there holds
$$(3.3)\qquad \forall u \in H^1_0(U^t), \quad \langle L_k u, u\rangle_{H^{-1}(U^t), H^1_0(U^t)} \ge \frac{1}{3\lambda}\|u\|^2_{H^1_0(U^t)}.$$
We establish this claim in section A.2.

Figure 3.2. For the example shown in figure 3.1, this illustrates an extension across $\Gamma_{15}$, the dashed line. The green area corresponds to $U^t_{1,5}$, with $t = \frac{1}{10}$. In this example, the Dirichlet problem is $L[1,2,3,5,5]u = 0$.
Consider the Dirichlet boundary value problem in $U^t$
$$\begin{cases} Lv = 0 & \text{in } U^t, \\ v = u^t & \text{on } \partial U^t, \end{cases}$$
which is well-posed thanks to (3.3). We estimate
$$\big\langle L(u^t - v), u^t - v\big\rangle_{H^{-1}(U^t)\times H^1(U^t)} = \big\langle Lu^t, u^t - v\big\rangle_{H^{-1}(U^t)\times H^1(U^t)} \le |J_0| + |J_1| + |J_2|,$$
where
$$J_0 = \int_Y AD(u^t - u)\cdot D(u^t - v) + (u^t - u)(b + c)\cdot D(u^t - v) + (q + \kappa)(u^t - u)(u^t - v)\,dx,$$

12 YVES CAPDEBOSCQ AND TIANRUI DAI

$$J_1 = \int_G ADu^t\cdot D(u^t - v) + bu^t\cdot D(u^t - v) + c\cdot Du^t\,(u^t - v) + (q + \kappa)u^t(u^t - v)\,dx,$$
and
$$J_2 = -\int_{\Gamma_k} (ADu^t + bu^t)\cdot n\,(u^t - v)\,dS.$$
Thanks to the Hölder regularity of $u$ and $Du$, see lemma 3,
$$|u^t(x) - u(x)| = \big|u\big(\psi_k^{-1}(t\psi_k(x))\big) - u\big(\psi_k^{-1}(\psi_k(x))\big)\big| \le C|1 - t|^\alpha\|u\|_{C^{0,\alpha}(\omega_1)}.$$
Similarly, $|Du^t(x) - Du(x)| \le C|1 - t|^\alpha\|Du\|_{C^{0,\alpha}(\omega_1)}$, and altogether
$$|J_0| \le C|1 - t|^\alpha\|u\|_{C^{1,\alpha}(\omega_1)}\|v - u^t\|_{H^1(U^t)}.$$
To estimate $J_1$, we write $|J_1| \le C\|u^t\|_{H^1(G)}\|v - u^t\|_{H^1(U^t)}$, and by interpolation,
$$\|u^t\|_{H^1(G)} \le C\big(\|u\|_{L^\infty(\Omega_1)} + \|Du\|_{L^\infty(\Omega_1)}\big)|G|^{\frac12} \le C\big(\|u\|_{L^\infty(\Omega_1)} + \|Du\|_{L^\infty(\Omega_1)}\big)(1 - t)^{\frac12}.$$
Thus altogether, writing $\beta = \min\big(\alpha, \tfrac12\big)$,
$$(3.4)\qquad |J_0| + |J_1| \le C\|u\|_{C^{1,\alpha}(\Omega_1)}\|v - u^t\|_{H^1(U^t)}|1 - t|^\beta.$$
Note that
$$(3.5)\qquad J_2 \le \|(ADu + bu)\cdot n\|_{L^2(\Gamma_k)}\|u^t - v\|_{L^2(\Gamma_k)} \le C\big(\|u\|_{L^\infty(\Omega_1)} + \|Du\|_{L^\infty(\Omega_1)}\big)\|u^t - v\|_{L^2(\Gamma_k)}.$$
Note that for every $x \in \Gamma_k$, $\psi_k^{-1}\big(\frac1t\psi_k(x)\big) \in \partial U^t$. Since $v = u^t$ on $\partial U^t$, we find on $\Gamma_k$
$$(u^t - v)(x) = (u^t - v)\big(\psi_k^{-1}\psi_k(x)\big) - (u^t - v)\Big(\psi_k^{-1}\Big(\frac1t\psi_k(x)\Big)\Big) = \int_1^{\frac1t} D\big((u^t - v)\circ\psi_k^{-1}\big)(\theta x)\cdot x\,d\theta.$$
Applying Cauchy–Schwarz, we find
$$\big|(u^t - v)(x)\big| \le C|1 - t|^{\frac12}\Big(\int_1^{\frac1t}\big|D\big((u^t - v)\circ\psi_k^{-1}\big)(\theta x)\big|^2\,d\theta\Big)^{\frac12},$$
and integrating over $\Gamma_k$,
$$(3.6)\qquad \|u^t - v\|^2_{L^2(\Gamma_k)} \le C|1 - t|\int_{\Gamma_k}\int_1^{\frac1t}\big|D\big((u^t - v)\circ\psi_k^{-1}\big)(\theta x)\big|^2\,d\theta\,dx \le C|1 - t|\,\|u^t - v\|^2_{H^1(G)}.$$
In turn, combining (3.4), (3.5) and (3.6),
$$\big|\big\langle L(u^t - v), u^t - v\big\rangle\big| \le C|1 - t|^\beta\|u\|_{C^{1,\alpha}(\Omega_1)}\|u^t - v\|_{H^1(U^t)}.$$
Thanks to (3.3), this implies $\|u^t - v\|_{H^1(U^t)} \le C|1 - t|^\beta\|u\|_{C^{1,\alpha}(\Omega_1)}$.

For every fixed $t$ consider the following system
$$(3.7)\qquad\begin{cases} L_k S = 0 & \text{in } \Omega \setminus \psi_k^{-1}\big(\partial B_{\frac{1-\eta}{t}}\big), \\ S = 0 & \text{on } \partial\Omega, \\ [S] = u - v & \text{on } \psi_k^{-1}\big(\partial B_{\frac{1-\eta}{t}}\big), \\ [(ADS + bS)\cdot n] = (ADu + bu)\cdot n - (ADv + bv)\cdot n & \text{on } \psi_k^{-1}\big(\partial B_{\frac{1-\eta}{t}}\big), \end{cases}$$
where $[\cdot]$ denotes the jump across the boundary. Thanks to lemma 14, this problem is well posed, and there exists some $S \in H^1\big(\Omega \setminus \psi_k^{-1}\big(\partial B_{\frac{1-\eta}{t}}\big)\big)$, solution of equation (3.7).
Moreover, there holds
$$\|S\|_{H^1\big(\Omega\setminus\psi_k^{-1}\big(\partial B_{\frac{1-\eta}{t}}\big)\big)} \le C\Big(\|u - v\|_{H^{1/2}\big(\psi_k^{-1}\big(\partial B_{\frac{1-\eta}{t}}\big)\big)} + \|(AD(u - v) + b(u - v))\cdot n\|_{H^{-1/2}\big(\psi_k^{-1}\big(\partial B_{\frac{1-\eta}{t}}\big)\big)}\Big) \le C\|u - v\|_{H^1(Y)}.$$
Using the triangle inequality, this yields
$$\|S\|_{H^1\big(\Omega\setminus\psi_k^{-1}\big(\partial B_{\frac{1-\eta}{t}}\big)\big)} \le C\big(\|u^t - u\|_{H^1(Y)} + \|u^t - v\|_{H^1(Y)}\big) \le C(1 - t)^\beta\|u\|_{C^{1,\alpha}(\Omega_1)}.$$
Take $\tilde v^t = v\,1_{U^t} + u\,1_{(W_k\setminus\Omega_1)\cup\psi_k^{-1}\big(B_{\frac{1-\eta}{t}}\big)} + S\,1_{\Omega\setminus\psi_k^{-1}\big(\partial B_{\frac{1-\eta}{t}}\big)}$. By construction, we have $\tilde v^t \in H^1(W_k \cup \Gamma_k \cup U^t)$ and there holds $\|\tilde v^t - u\|_{H^1(W_k)} \le C(1 - t)^\beta\|u\|_{C^{1,\alpha}(\Omega_1)}$. The conclusion follows choosing $t$ close enough to $1$, and $U = W_k \cup \Gamma_k \cup U^t$. □

The third step is to extend the solution to the whole $\Omega$.

Lemma 17. With the notations of lemma 16, for any $\varepsilon > 0$ there exists a weak solution $v \in H^1(\Omega)$ of $L_k v = 0$ in $\Omega$ such that $\|v - u\|_{H^1(U)} < \varepsilon$.
Proof. Note that on $V_k = \Omega \setminus \overline{W_k}$ the coefficients of $L_k$ are not discontinuous, and the Unique Continuation Property holds. As a result, there exists a sequence of functions $(u_n)_{n\in\mathbb{N}} \in H^1(V_k)^{\mathbb{N}}$ such that $L_k u_n = 0$ on $V_k$ and $\|u_n - u\|_{H^1(U\cap V_k)} \le \frac1n$, which implies that
$$\|u_n - u\|_{H^{1/2}(\partial W_k)} \le \|u_n - u\|_{H^{1/2}(\partial(U\cap V_k))} \le \|u_n - u\|_{H^1(U\cap V_k)} \le \frac{C}{n}.$$
Let $\nu$ be the outer normal vector of $\partial W_k$, and let
$$F_1 : H^1(W_k) \to H^{-1/2}(\partial W_k), \quad u \mapsto \big(\tilde A|_{W_k}\,Du + \tilde b|_{W_k}\,u\big)\nu$$
and
$$F_2 : H^1(U\setminus W_k) \to H^{-1/2}(\partial W_k), \quad u \mapsto \big(\tilde A|_{V_k}\,Du + \tilde b|_{V_k}\,u\big)\nu.$$

14 YVES CAPDEBOSCQ AND TIANRUI DAI

Since $u \in H^1(U)$ is a weak solution of $L_k u = 0$ in $U$, there holds $F_1(u) = F_2(u)$ on $\partial\Omega_{I_k}$. As a result,
$$\|F_1(u) - F_2(u_n)\|_{H^{-1/2}(\partial W_k)} \le \|F_2(u) - F_2(u_n)\|_{H^{-1/2}(\partial W_k)} \le \|u_n - u\|_{H^1(U\cap V_k)} \le \frac{C}{n}.$$
Consider the following system in $\Omega$
$$(3.8)\qquad\begin{cases} L_k s_n = 0 & \text{in } \Omega\setminus\partial W_k, \\ s_n = 0 & \text{on } \partial\Omega, \\ [s_n] = u - u_n & \text{on } \partial W_k, \\ [(ADs_n + bs_n)\cdot\nu] = F_1(u) - F_2(u_n) & \text{on } \partial W_k. \end{cases}$$
Lemma 14 implies that there exists $s_n \in H^1(\Omega\setminus\partial W_k)$, a weak solution of equation (3.8), and there holds
$$(3.9)\qquad \|s_n\|_{H^1(\Omega\setminus\partial W_k)} \le C\big(\|F_1(u) - F_2(u_n)\|_{H^{-1/2}(\partial W_k)} + \|u - u_n\|_{H^{1/2}(\partial W_k)}\big) \le \frac{C}{n}.$$
Let $v_n = s_n 1_{\Omega\setminus\partial W_k} + u\,1_U + u_n 1_{V_k}$. By construction, $v_n \in H^1(\Omega)$ is a weak solution of equation (2.4). Moreover, we have
$$\|v_n - u\|_{H^1(U)} \le \|s_n\|_{H^1(\Omega\setminus\partial W_k)} + \|u_n - u\|_{H^1(U\setminus W_k)} \le \frac{C}{n},$$
and the conclusion follows. □

We now turn to the proof of theorem 7.
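Transmission systems such as (3.7) and (3.8), with prescribed jumps in the trace and in the flux across an interface, admit a transparent one-dimensional caricature. The following sketch is an illustration under simplifying assumptions of ours (constant coefficient $1$, interval $(0,1)$, interface at $x = \tfrac12$, hypothetical jump data $g$, $h$), not the paper's construction:

```python
def solve_1d_transmission(g, h):
    # Find s with s'' = 0 on (0, 1/2) and (1/2, 1), s(0) = s(1) = 0,
    # trace jump [s](1/2) = g and flux jump [s'](1/2) = h.
    # Ansatz: s(x) = alpha*x on the left, s(x) = beta*(x - 1) on the right,
    # so the boundary conditions hold automatically and (right minus left):
    #   [s]  = -beta/2 - alpha/2 = g
    #   [s'] =  beta   - alpha   = h
    alpha = (-h - 2 * g) / 2
    beta = (h - 2 * g) / 2
    return alpha, beta

def s(x, alpha, beta):
    return alpha * x if x < 0.5 else beta * (x - 1)

alpha, beta = solve_1d_transmission(g=1.0, h=0.0)
assert s(0.0, alpha, beta) == 0.0
# the prescribed trace jump and flux jump are recovered
jump_s = beta * (0.5 - 1) - alpha * 0.5
jump_ds = beta - alpha
assert abs(jump_s - 1.0) < 1e-12
assert abs(jump_ds - 0.0) < 1e-12
```

Note that $\alpha$ and $\beta$ depend linearly on $(g, h)$, mirroring the stability estimate (3.9): the solution norm is controlled by the size of the jump data.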
Figure 3.3. A construction following the construction map $i : \{1,2,3,4,5\} \to \{2,3,1,5,4\}$; the successive panels are labelled $(2,2,2,2,2)$, $(2,2,3,2,2)$, $(1,2,3,1,1)$, $(1,2,3,5,5)$ and $(1,2,3,4,5)$. Every colour represents one set of regular coefficients. At each step, all the subdomains within which the construction has not been performed have the same parameters as the subdomain where the solution is constructed.

Proof of theorem 7. Given $\sigma > 0$ and $x \in \Omega\setminus\cup_{i\ne j}\Gamma_{ij}$, we choose a construction map $i \in S_{N+1}$ such that the starting point $x$ is in the first set, $x \in \Omega_{I_1}$. Using lemma 15 for the first step, and then applying lemma 16 and lemma 17 inductively, with $L_1 = L\big[j^i_1\big]$, …,
$L_{N+1} = L\big[j^i_{N+1}\big] = L$, the conclusion follows. □

We now turn to the original operator (represented by $L_{\mathrm{original}} = L - \kappa$), to prove theorem 7.

Proof of theorem 7 for $L_{\mathrm{original}}$. Thanks to theorem 7 for $L = L_{\mathrm{original}} + \kappa$ there holds

Claim 18. For any $\sigma > 0$, there exists $\varepsilon > 0$ such that for any $x \in \Omega\setminus\cup_{i\ne j}\Gamma_{ij}$ there exist $d+1$ solutions, denoted $u^x_1, u^x_2, \dots, u^x_{d+1}$, such that $u^x_i \in H^1(\Omega)$ and $Lu^x_i = 0$ in $\Omega$ for $i \in \{1,2,\dots,d+1\}$, and
$$\big|\det J\big(u^x_1, u^x_2, \dots, u^x_{d+1}\big)\big|(y) > \sigma \quad\text{for any } y \in B(x,\varepsilon)\cap\Omega_j,\ j \in \{1,\dots,N+1\}.$$

If the Dirichlet boundary value problem associated with $L_{\mathrm{original}}$ is well-posed, then for any $i \in$
$\{1,\dots,d+1\}$ and any $x \in \Omega\setminus\cup_{i\ne j}\Gamma_{ij}$, consider the following Dirichlet boundary value problem:
$$\begin{cases} L_{\mathrm{original}}\,v^x_i = 0 & \text{in } \Omega, \\ v^x_i = u^x_i & \text{on } \partial\Omega. \end{cases}$$
Then $v^x_i - u^x_i \in H^1_0(\Omega)$ satisfies $L_{\mathrm{original}}(v^x_i - u^x_i) = \kappa u^x_i$ in $\Omega$, and thanks to the well-posedness of $L_{\mathrm{original}}$, we have $\|v^x_i - u^x_i\|_{H^1_0(\Omega)} \le C\kappa$, where the finite constant $C$ is independent of $x$. Thanks to the regularity of $L_{\mathrm{original}}$ in $\Omega_j$, we have $\|v^x_i - u^x_i\|_{C^{1,\alpha}(\Omega_j)} \le C\kappa$. Take $\kappa$ small enough (since $\kappa \in (0,\vartheta)$ is chosen arbitrarily) and take a corresponding $\varepsilon$ given in claim 18 for $L$; thanks to the multi-linearity of $\det J$, we conclude that $\big|\det J\big(v^x_1, v^x_2, \dots, v^x_{d+1}\big)\big|(y) > \sigma$ for any $y \in B(x,\varepsilon)\cap\Omega_j$, $j \in \{1,\dots,N+1\}$.
If the Dirichlet boundary value problem associated with $L_{\mathrm{original}}$ is not well-posed, the kernel of the solution map, written $\ker(L_{\mathrm{original}})$ to avoid introducing additional notations, is finite dimensional and not empty. For any $x \in \Omega\setminus\cup_{i\ne j}\Gamma_{ij}$ and any $i \in \{1,\dots,d+1\}$, write $u^x_i = u_1 + u_2$, where $u_1 \in \ker(L_{\mathrm{original}}) \subset H^1_0(\Omega) \subset L^2(\Omega)$ and $u_2 \in \ker(L_{\mathrm{original}})^\perp \subset L^2(\Omega)$. By the Fredholm alternative, there exists a unique $v_2 \in H^1(\Omega)$ such that $v_2 - u_2 \in H^1_0(\Omega)\cap\ker(L_{\mathrm{original}})^\perp$ satisfies
$$\begin{cases} L_{\mathrm{original}}(v_2 - u_2) = -L_{\mathrm{original}}\,u_2 = \kappa u^x_i & \text{in } \Omega, \\ v_2 - u_2 = 0 & \text{on } \partial\Omega. \end{cases}$$
Furthermore, $\|v_2 - u_2\|_{H^1_0(\Omega)} \le C\kappa$. Choose $v^x_i = u_1 + v_2$, which satisfies $\|v^x_i - u^x_i\|_{H^1(\Omega)} \le C\kappa$.
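The Fredholm step above has a transparent finite-dimensional analogue. The sketch below is a caricature with hypothetical numbers (a diagonal singular operator, kernel spanned by $e_3$, right-hand side of size $O(\kappa)$): the equation is solvable exactly when the data is orthogonal to the kernel, and the solution orthogonal to the kernel is unique and controlled linearly by the data:

```python
def solve_mod_kernel(diag, f, tol=1e-12):
    # A = diag(diag) is symmetric; ker A = span of coordinates with diag[i] == 0.
    # Solvability (Fredholm alternative): f must vanish on the kernel.
    for d, fi in zip(diag, f):
        if d == 0 and abs(fi) > tol:
            raise ValueError("f is not orthogonal to ker A")
    # the unique solution orthogonal to the kernel
    return [fi / d if d != 0 else 0.0 for d, fi in zip(diag, f)]

kappa = 1e-3
diag = [2.0, 3.0, 0.0]           # singular operator: kernel = span(e3)
f = [2 * kappa, 3 * kappa, 0.0]  # data of size O(kappa), orthogonal to ker A
w = solve_mod_kernel(diag, f)
# ||w|| <= C * kappa with C = 1 here, mirroring ||v2 - u2|| <= C kappa
assert max(abs(wi - ei) for wi, ei in zip(w, [kappa, kappa, 0.0])) < 1e-15
```

Any component of the data along the kernel would make the system unsolvable, which is exactly why the decomposition $u^x_i = u_1 + u_2$ is performed first.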
Taking $\kappa$ small enough, thanks to the regularity of the coefficients in each subdomain and the multi-linearity of $\det J$, we conclude that $\big|\det J\big(v^x_1, v^x_2, \dots, v^x_{d+1}\big)\big|(y) > \sigma$. □

4. Proof of Proposition 8

We recall the definition of the geometric complement of an open set $\Omega \subset \mathbb{R}^d$: it is the smallest open set $\Pi \subset \mathbb{R}^d$ such that $\Omega \subset \Pi$ and the genus of $\Pi$ equals zero.

Definition 19. Given any open set $U \subset \Omega$, we write $g_U = \#\{j \in \{1,\dots,N+1\} : \Omega_j \subset U\}$, which is the number of pieces contained in $U$. By construction, we have $g_\Omega = N+1$.

Lemma 20. Set $h_1 = (x_1,\dots,x_d)$ on $S^{d-1}$.
When $d = 2$, $4$, or $8$, there exists $\{h_2,\dots,h_d\} \in \big(C^1(S^{d-1};\mathbb{R}^d)\big)^{d-1}$ such that $(h_1, h_2, \dots, h_d) \in SO_d(S^{d-1})$, where $SO_d$ refers to the real orthogonal matrices with positive determinant. Otherwise, for $d \ge 3$ there exists $\{h_2,\dots,h_{d+1}\} \in \big(C^1(S^{d-1};\mathbb{R}^d)\big)^d$ such that $(h_1, h_2, \dots, h_{d+1}) \in SO_{d+1}(S^{d-1})$.

This lemma is proved in section B.

4.1. Proof of Proposition 8 when $d = 2$, $4$ or $8$.

Proof. Let $\Pi_i$ be the geometric complement of $\Omega_i$, where $i \in \{1,\dots,N\}$.
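In the lowest parallelizable dimension, $d = 2$, the frame of lemma 20 can be written down explicitly by rotating the position vector through a right angle. The sketch below checks this (the choice $h_2 = (-x_2, x_1)$ is the standard one and an assumption of this illustration, not necessarily the frame constructed in section B):

```python
import math

def frame(x1, x2):
    # h1 = (x1, x2) is the position on S^1; h2 = (-x2, x1) rotates it by 90
    # degrees, giving a smooth orthonormal tangent frame: an explicit
    # instance of lemma 20 for d = 2
    return (x1, x2), (-x2, x1)

def det2(a, b):
    return a[0] * b[1] - a[1] * b[0]

for k in range(8):
    theta = 2 * math.pi * k / 8
    x = (math.cos(theta), math.sin(theta))
    h1, h2 = frame(*x)
    dot = h1[0] * h2[0] + h1[1] * h2[1]
    assert abs(dot) < 1e-12                 # h1 and h2 are orthogonal
    assert abs(det2(h1, h2) - 1.0) < 1e-12  # (h1, h2) lies in SO_2 everywhere
```

The analogous explicit frames for $d = 4$ and $d = 8$ come from quaternion and octonion multiplication; for other $d$ no such global frame exists, which is why the lemma falls back to $d+1$ vector fields.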
There exists a $C^{1,1}$ diffeomorphism $H_i : B_2 \to \Pi_i$, which induces a $C^{0,1}$ bijection on the vector fields:
$$DH_i : C^{0,1}(B_2;\mathbb{R}^d) \to C^{0,1}(\Pi_i;\mathbb{R}^d).$$
It maps $SO_d$ to $SO_d$, since the degree of $H_i$ is either $1$ or $-1$, and moreover it maps the tangent vectors (respectively the normal vector) on the sphere to the tangent vectors (respectively the normal vector) on $\partial\Pi_i$, which is the outer boundary of $\Omega_i$. Take $B_{r_i} \subset B_2 \subset B_{r^*_i}$, $1 < r_i < 2 < r^*_i$, such that $g_{H_i(B_{r_i})} + 1 = g_{\Pi_i} = g_{H_i(B_{r^*_i})}$, and such that the genus of $\Pi_i\setminus H_i(B_{r_i})$ equals the genus of $H_i(B_{r^*_i})\setminus\Pi_i$, and equals one. In particular any $\Omega_j$, $j \ne i$, contained in $\Pi_i$ is contained in $H_i(B_{r_i})$ and $H_i(B_{r^*_i})$. Applying lemma 20 with $R = 2$, when $d = 2, 4, 8$, there exists a family $\{h_1,\dots,h_d\}$ of $C^1$ unit vector fields on $\partial B_2$.
We construct $\{f_1,\dots,f_d\} \in C^{0,1}\big(H_i(B_{r^*_i})\setminus H_i(B_{r_i});\,SO_d\big)$ as follows.

Criterion 21.
(1) $\{f_1,\dots,f_d\} = \{DH_i(h_1),\dots,DH_i(h_d)\}$ on $\partial\Pi_i$.
(2) On $H_i(\partial B_{r_i})$ and $H_i(\partial B_{r^*_i})$, let $\{f_1,\dots,f_d\} = \{e_1,\dots,e_d\}$. In other words, we have $(f_1,\dots,f_d) = \mathrm{Id}$ on $H_i(\partial B_{r_i})$ and $H_i(\partial B_{r^*_i})$.
(3) Since $SO_d$ is path connected, at each $x \in \partial B_{r_i}$ there exists a path $S \in C^{0,1}(\partial B_{r_i}\times[r_i, r^*_i];\,SO_d)$ such that $S(x, r_i) = DH_i^{-1}(\mathrm{Id})$, $S(x, 2) = (h_1,\dots,h_d)\big(2\frac{x}{\|x\|}\big)$ and $S(x, r^*_i) = DH_i^{-1}(\mathrm{Id})$. There holds
$$|S(x,r) - S(x,r')| \le \frac{C(d)}{r^*_i - r_i}|r - r'|, \quad\text{and}\quad \|D_x S(x,r)\|_\infty \le C(d)\|DH_i^{-1}\|_\infty\|(Dh_1,\dots,Dh_d)\|_\infty.$$
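Step (3) interpolates between rotations along a path in $SO_d$ whose Lipschitz constant in the radial parameter scales like $1/(r^*_i - r_i)$. In $SO_2$ one can take the geodesic (rotation by a linearly interpolated angle) and the bound is explicit. A sketch under illustrative assumptions (the interval, target angle and Frobenius norm are our choices):

```python
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def frob_dist(a, b):
    return math.sqrt(sum((a[i][j] - b[i][j]) ** 2
                         for i in range(2) for j in range(2)))

def path(r, r_lo, r_hi, theta_target):
    # geodesic in SO_2 from Id at r = r_lo to rot(theta_target) at r = r_hi
    lam = (r - r_lo) / (r_hi - r_lo)
    return rot(lam * theta_target)

r_lo, r_hi, theta = 1.2, 2.3, 2.0
# |S(r) - S(r')|_F <= sqrt(2) * |theta| / (r_hi - r_lo) * |r - r'|,
# the analogue of the C(d) / (r*_i - r_i) bound in step (3)
lip = math.sqrt(2) * abs(theta) / (r_hi - r_lo)
rs = [r_lo + k * (r_hi - r_lo) / 10 for k in range(11)]
for r in rs:
    for rp in rs:
        d = frob_dist(path(r, r_lo, r_hi, theta), path(rp, r_lo, r_hi, theta))
        assert d <= lip * abs(r - rp) + 1e-12
```

The bound follows from $\|R(\theta) - R(\theta')\|_F = 2\sqrt{2}\,|\sin\frac{\theta-\theta'}{2}| \le \sqrt{2}\,|\theta - \theta'|$ together with the linear angle interpolation.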
(4) For any $r \in (r_i, r^*_i)$, set $(f_1,\dots,f_d)\big(H_i\big(r\frac{x}{\|x\|}\big)\big) = DH_i(S_x(r)) := DH_i(S(x,r))$.

In the construction above, for any $x \in H_i(B_{r^*_i})\setminus H_i(B_{r_i})$, we have $(f_1,\dots,f_d)(x) \in SO_d$. Moreover, since $(f_1,\dots,f_d)$ is constructed by a composition of Lipschitz maps, $\{f_1,\dots,f_d\}$ is of class $C^{0,1}$ in $H_i(B_{r^*_i})\setminus H_i(B_{r_i})$. Indeed,
$$\begin{aligned}
\|f_k(H_i(x)) - f_k(H_i(y))\| \le{}& \Big\|f_k\Big(H_i\Big(\|x\|\tfrac{x}{\|x\|}\Big)\Big) - f_k\Big(H_i\Big(\tfrac{\|y\|+\|x\|}{2}\tfrac{x}{\|x\|}\Big)\Big)\Big\| + \Big\|f_k\Big(H_i\Big(\tfrac{\|x\|+\|y\|}{2}\tfrac{x}{\|x\|}\Big)\Big) - f_k\Big(H_i\Big(\tfrac{\|y\|+\|x\|}{2}\tfrac{y}{\|y\|}\Big)\Big)\Big\| \\
&+ \Big\|f_k\Big(H_i\Big(\tfrac{\|x\|+\|y\|}{2}\tfrac{y}{\|y\|}\Big)\Big) - f_k\Big(H_i\Big(\|y\|\tfrac{y}{\|y\|}\Big)\Big)\Big\| \\
={}& \Big\|DH_i(S_x(\|x\|)) - DH_i\Big(S_x\Big(\tfrac{\|y\|+\|x\|}{2}\Big)\Big)\Big\| + \Big\|DH_i(S_y(\|y\|)) - DH_i\Big(S_y\Big(\tfrac{\|y\|+\|x\|}{2}\Big)\Big)\Big\| \\
&+ \Big\|DH_i\Big(S_x\Big(\tfrac{\|y\|+\|x\|}{2}\Big)\Big) - DH_i\Big(S_y\Big(\tfrac{\|y\|+\|x\|}{2}\Big)\Big)\Big\| \\
\le{}& C(d)\Big(\frac{1}{r^*_i - r_i} + \|DH_i^{-1}\|_\infty\|(Dh_1,\dots,Dh_d)\|_\infty\Big)\|x - y\|. \qquad(4.1)
\end{aligned}$$
Note that for each $i \in \{1,\dots,N\}$, we have $(f_1,\dots,f_d) = \mathrm{Id}$ on $\partial\big(H_i(B_{r^*_i})\setminus H_i(B_{r_i})\big)$. Set
$$(4.2)\qquad (f_1,\dots,f_d) = \mathrm{Id} \quad\text{in } Q := \Omega\setminus\bigcup_{i=1}^N H_i(B_{r^*_i})\setminus H_i(B_{r_i}).$$
In each $H_i(B_{r^*_i})\setminus H_i(B_{r_i})$, $\{f_1,\dots,f_d\}$ is of class $C^{0,1}$, continuous on $\partial\big(H_i(B_{r^*_i})\setminus H_i(B_{r_i})\big)$, and Lipschitz continuous in $Q$ thanks to (4.2). Thus it is of class $C^{0,1}$ in the whole $\Omega$. To conclude the proof of proposition 8, we now check that for every $u \in H^1(\Omega)$ such that $Lu = 0$ in $\Omega$,
$$J_f(u, F) = \big((A\nabla u + bu)\cdot f_1,\ \nabla u\cdot f_2,\ \dots,\ \nabla u\cdot f_d,\ u\big)$$
is of class $C^{0,\alpha}$ in $\Omega$. Note that for each $H_i(B_{r^*_i}\setminus B_{r_i})$, there exists only one $j \in \{1,\dots,N+1\}\setminus\{i\}$ such that $\Omega_j\cap H_i(B_{r^*_i}\setminus B_{r_i}) \ne \emptyset$ and $\Gamma_{ij} = H_i(\partial B_2) \subset H_i(B_{r^*_i}\setminus B_{r_i})$. Thanks to the continuity of the flux $(ADu + bu)\cdot n = (ADu + bu)\cdot f_1$ on $\Gamma_{ij}$, the Lipschitz continuity of $F$, and the $C^{0,\alpha}$ continuity of $Du$, $u$, $A$ and $b$ in $\Omega_i$ or $\Omega_j$, we conclude that $J_f(u, F)$ is of class $C^{0,\alpha}$ in each $H_i(B_{r^*_i}\setminus B_{r_i})$ and in $Q$.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Moreover, we note that on each ∂Hi � Br∗ i \\ Bri � , the coefficients A and b are uniformly C0,α, as they are in the interior of Ωi or Ωj.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Therefore, we have Jf (u, F) is of class C0,α on ∂Q \\ ∂Ω = ∪i∂Hi � Br∗ i \\ Bri � (Note that for different k and s, ∂Hk � Br∗ k \\ Brk � ∩ ∂Hs � Br∗s \\ Brs � = ∅).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In particular it is continuous.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Thus Jf (u, F) is of class C0,α on Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' □ 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Proof of Proposition 8 for other dimensions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Let Πi be the geometric complement of Ωi, where i ∈ {1, · · · , N}.' 
There exists a C^{1,1} diffeomorphism H_i : B_2 → Π_i, which induces a C^{0,1} bijection on the vector fields:

\[
DH_i : C^{0,1}\left(B_2; \mathbb{R}^d\right) \to C^{0,1}\left(\Pi_i; \mathbb{R}^d\right).
\]

It is a map which maps SO_d to SO_d, since the degree of H_i is either 1 or −1; moreover, it maps the tangent vectors (respectively the normal vector) on the sphere to the tangent vectors (respectively the normal vector) on ∂Π_i, which is the outer boundary of Ω_i. Take B_{r_i} ⊂ B_2 ⊂ B_{r_i^*}, 1 < r_i < 2 < r_i^*, such that g_{H_i(B_{r_i})} + 1 = g_{Π_i} = g_{H_i(B_{r_i^*})}, and such that the genus of Π_i \ H_i(B_{r_i}) equals the genus of H_i(B_{r_i^*}) \ Π_i, and equals one. In particular, any Ω_j, j ≠ i, contained in Π_i is contained in H_i(B_{r_i}) and H_i(B_{r_i^*}). For any M = (m_{ij})_{(d+1)×(d+1)} ∈ ℝ^{(d+1)×(d+1)}, we write P(M) = (m_{ij})_{(d+1)×d}.

Thanks to Lemma 20, we construct {f_1, ⋯, f_{d+1}} ∈ C^{0,1}(H_i(B_{r_i^*}) \ H_i(B_{r_i}); ℝ^d)^{d+1} with rank equal to d as follows:

(1) {f_1, ⋯, f_{d+1}} = {DH_i(h_1), ⋯, DH_i(h_{d+1})} on ∂Π_i.
(2) On H_i(∂B_{r_i}) and H_i(∂B_{r_i^*}), let {f_1, ⋯, f_{d+1}} = P(I_{d+1}).
(3) There exists a C^{0,1} path S : ∂B_{r_i} × [r_i, r_i^*] → SO_{d+1} such that S(x, r_i) = DH_i^{-1}(I_{d+1}), S(x, 2) = DH_i^{-1}(H_d(x)) (where H_d is given in equation (B.1)), and S(x, r_i^*) = DH_i^{-1}(I_{d+1}).
(4) For any r ∈ (r_i, r_i^*) and x ∈ ∂B_{r_i}, take (f_1, ⋯, f_{d+1})(H_i(rx/r_i)) = P(DH_i(S(x, r))).

Since S(x, r) ∈ SO_{d+1} for any x ∈ ∂B_{r_i} and r ∈ [r_i, r_i^*], we have rank S(x, r) = d + 1.
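The step from rank S(x, r) = d + 1 down to rank d after applying P can be spelled out in one line (a sketch, using only the definition of P above: P keeps the first d columns of a (d+1) × (d+1) matrix):

```latex
% S(x,r) \in SO_{d+1}, so its columns s_1, \dots, s_{d+1} are orthonormal:
S^{\mathsf{T}} S = I_{d+1}
\quad\Longrightarrow\quad
\langle s_j, s_k \rangle = \delta_{jk}.
% P(S) retains the first d columns, which remain linearly independent:
\operatorname{rank} P(S) = \operatorname{rank}\,(s_1 \mid \cdots \mid s_d) = d.
```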
Therefore, rank P(S(x, r)) = d. As before, we conclude that {f_1, ⋯, f_{d+1}} is also of class C^{0,1} in H_i(B_{r_i^*}) \ H_i(B_{r_i}). Note that for each i ∈ {1, ⋯, N}, we have (f_1, ⋯, f_{d+1}) = P(I_{d+1}) on ∂(H_i(B_{r_i^*}) \ H_i(B_{r_i})). Set

\[
(f_1, \cdots, f_{d+1}) = P(I_{d+1}) \quad \text{in } Q := \Omega \setminus \bigcup_{i=1}^{N} \left(H_i\left(B_{r_i^*}\right) \setminus H_i\left(B_{r_i}\right)\right). \tag{4.3}
\]

As we proved before, in each H_i(B_{r_i^*}) \ H_i(B_{r_i}), {f_1, ⋯, f_{d+1}} is of class C^{0,1}. It is continuous on ∂(H_i(B_{r_i^*}) \ H_i(B_{r_i})) and Lipschitz continuous in Q thanks to equation (4.3), and therefore of class C^{0,1} globally on Ω. The rest of the proof is identical to that given when d = 2, 4 or 8. □
18 YVES CAPDEBOSCQ AND TIANRUI DAI
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' URL: http://dx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1088/0266-5611/30/2/025001, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1088/0266-5611.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [BM58] Raoul Bott and John Milnor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' On the parallelizability of the spheres.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Bulletin of the American Math- ematical Society, 64(3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' P1):87–89, 1958.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [BMN01] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Bauman, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Marini, and V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Nesi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Univalent solutions of an elliptic system of partial differ- ential equations arising in homogenization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Indiana Univ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', 50(2):747–757, 2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' URL: http://dx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1512/iumj.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1832, doi:10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1512/iumj.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1832.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [BR11] Guillaume Bal and Kui Ren.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Multi-source quantitative photoacoustic tomo- graphy in a diffusive regime.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Inverse Problems, 27(7):075003, 20, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' URL: http://dx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1088/0266-5611/27/7/075003, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1088/0266-5611.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [Bro62] Felix E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Browder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' On approximation by solutions of partial differential equations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Bull.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Am.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', 68:36–38, 1962.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1090/S0002-9904-1962-10691-0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [BU10] Guillaume Bal and Gunther Uhlmann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Inverse diffusion theory of photoacoustics.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Inverse Problems, 26(8):085010, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' URL: http://stacks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='iop.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='org/0266-5611/26/i=8/a=085010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [BU13] Guillaume Bal and Gunther Uhlmann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Reconstruction of coefficients in scalar second-order elliptic equations from knowledge of their solutions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Pure Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', 66(10):1629–1652, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1002/cpa.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='21453.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' JACOBIAN CONSTRAINTS WITH DISCONTINUOUS COEFFICIENTS 19 [Cal80] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='-P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Calderón.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' On an inverse boundary value problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In Seminar on Numerical Analysis and its Applications to Continuum Physics (Rio de Janeiro, 1980), pages 65–73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Brasil.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', Rio de Janeiro, 1980.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [Cap15] Yves Capdeboscq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' On a counter-example to quantitative Jacobian bounds.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Éc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' polytech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', 2:171–178, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' URL: http://dx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='5802/jep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='21, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='5802/jep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [CFdGK09] Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Capdeboscq, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Fehrenbach, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' de Gournay, and O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Kavian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Imaging by modification: numer- ical reconstruction of local conductivities from corresponding power density measurements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' SIAM J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Imaging Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', 2(4):1003–1030, 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [Cho21] Mourad Choulli.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Some stability inequalities for hybrid inverse problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', Acad.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Paris, 359(10):1251–1265, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='5802/crmath.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='262.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [CK98] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Colton and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Kress.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Inverse acoustic and electromagnetic scattering theory, volume 93 of Applied Mathematical Sciences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Springer-Verlag, Berlin, second edition, 1998.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [CLR20] Mihajlo Cekić, Yi-Hsuan Lin, and Angkana Rüland.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The Calderón problem for the fractional Schrödinger equation with drift.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Calc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Var.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Partial Differ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Equ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', 59(3):46, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Id/No 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1007/s00526-020-01740-6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [CT19] Mourad Choulli and Faouzi Triki.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Hölder stability for an inverse medium problem with internal data.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', 6(1):15, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Id/No 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1007/s40687-018-0171-z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [CV03] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Capdeboscq and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='˜S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Vogelius.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' A general representation formula for boundary voltage perturb- ations caused by internal conductivity inhomogeneities of low volume fraction.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' M2AN Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Numer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', 37(1):159–173, 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [DDBR19] Neda Davoudi, Xose Luis Dean-Ben, and Daniel Razansky.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Deep learning optoacoustic tomo- graphy with sparse data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' NATURE MACHINE INTELLIGENCE, 1(10):453–460, OCT 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1038/s42256-019-0095-3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [Gia93] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Giaquinta.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Introduction to Regularity Theory for Nonlinear Elliptic systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Lectures in mathem- atics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Birkhauser Verlag, 1993.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' [GW75] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Greene and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Wu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Whitney’s imbedding theorem by solutions of elliptic equations and geometric consequences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' In Differential geometry (Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Sympos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Pure Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=', Vol.' 
YVES CAPDEBOSCQ AND TIANRUI DAI

Appendix A. Additional Proofs
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Proof of lemma 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Proof of lemma 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Given v ∈ H1 0 (Ω) ,there holds, using the a priori bounds 2, Cauchy-Schwarz and completing a square, ⟨Lv, v⟩H−1(Ω)×H1(Ω) = � Ω ADu · Du + (b + c) Du · u + qu2dx ≥ λ ∥Du∥2 L2(Ω) − 2λ−1 ∥Du∥L2(Ω) ∥u∥L2(Ω) − λ−1 ∥u∥2 L2(Ω) ≥ λ � ∥Du∥L2(Ω) − λ−2 ∥u∥L2(Ω) �2 − � λ−1 + λ−3� ∥u∥2 L2(Ω) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Thus writing M = λ−1 + λ−3 + 1, for any i1, · · · , iN+1 ∈ {1, · · · , N + 1}N+1, all Dirichlet bound- ary value problems associated with L [i1, · · · , iN+1] + MId are well-posed in Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' If the Dirichlet boundary value problem associated with Li := L [i1, · · · , iN+1] is not well-posed, there exists a non-zero solution of � Liu = 0 in Ω u = 0 on ∂Ω Consider (Li + MId)−1 as a linear operator from L2 (Ω) to L2 (Ω) ∩ H1 0 (Ω).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The ill-posedness of Li implies that M −1 ∈ σ � (Li + MId)−1� .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Thanks to the Rellich–Kondrachov embedding, (Li + MId)−1 : L2 (Ω) → L2 (Ω) is a compact linear operator acting on L2 (Ω), therefore M −1 is an isolated eigenvalue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' That is, there exists ℵ1 [i1,··· ,iN+1] > 0 such that B � M −1, ℵ1 [i1,··· ,iN+1] � \\ {M −1} ⊂ ρ � (Li + MId)−1� .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' When the Dirichlet boundary value problem is well-posed, M −1 ∈ ρ � (Li + MId)−1� .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' The resolvent is open, thus there exists some ℵ2 [i1,··· ,iN+1] > 0 such that B � M −1, ℵ2 [i1,··· ,iN+1] � ⊂ ρ � (Li + MId)−1� .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' JACOBIAN CONSTRAINTS WITH DISCONTINUOUS COEFFICIENTS 21 Now define ℵ = min i1,··· ,iN+1∈{1,··· ,N+1} � ℵ1 [i1,··· ,iN+1], ℵ2 [i1,··· ,iN+1] � , and ϑ = ℵM 2 1 + ℵM .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' We verify that for every κ ∈ (0, ϑ) M −1 ̸∈ σ � (Li + κ + M)−1� , which in turn means that Li + κ is well posed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' □ A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Proof of lemma 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Fact.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' There exists some η0 > 0, such that for any 0 < η < η0, and any t ∈ � 1 2, 1 � there holds (A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='1) ∀u ∈ H1 0 � U t� , ⟨Lku, u⟩H−1(Ut),H1 0 (Ut) ≥ 1 3λ ∥u∥2 H1 0 (Ut) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Indeed, we have, for any t > 0, ⟨Lku, u⟩ = � Ut A∇u · ∇u + u (b + c) · ∇u + qu2dx ≥ λ ∥∇u∥2 L2(Ut) − 2λ−1 � Ut |∇u| |u| dx − λ−1 ∥u∥2 L2(Ut) ≥ λ 2 ∥∇u∥2 L2(Ut) − λ2 + 2 λ3 ∥u∥2 L2(Ut) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' (A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='2) To address the lower order term we rely on lemma 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Since U t = ψ−1 k � B 1 t \\ B 1 t (1−η) � , by changing variables, lemma 22 shows that for any u ∈ H1 0 (U t) there holds (A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='3) ∥u∥2 L2(Ut) ≤ C η2 t2 ∥∇u∥2 L2(Ut) ≤ 4Cη2 ∥∇u∥2 L2(Ut) .' 
Combining equation (A.2) and equation (A.3), we have
\[
\langle Lu, u\rangle_{H^{-1}(U^t)\times H^1_0(U^t)} \geq \left(\frac{\lambda}{2} - 4C\,\frac{\lambda^2+2}{\lambda^3}\,\eta^2\right)\|\nabla u\|^2_{L^2(U^t)}.
\]
Choosing $\eta > 0$ small enough, there holds for all $t \in \left[\frac12, 1\right]$,
\[
\langle Lu, u\rangle_{H^{-1}(U^t)\times H^1_0(U^t)} \geq \frac{\lambda}{3}\|\nabla u\|^2_{L^2(U^t)}. \qquad \square
\]

Lemma 22. Write $B_r$ for the ball centred at the origin of radius $r$. Given $0 < r_1 < r_2$, for any $s$ and $t$ such that $r_1 < t < s < r_2$, there holds
\[
\forall u \in H^1_0(B_s \setminus B_t), \qquad \|u\|^2_{L^2(B_s\setminus B_t)} \leq c\,(s-t)^2\,\|\nabla u\|^2_{L^2(B_s\setminus B_t)},
\]
for some constant $c$, which depends on $r_1$ and $r_2$ only.

Proof. Consider the Dirichlet eigenvalue problem in $B_s \setminus B_t$:
\[
\begin{cases} \triangle u = \rho_{st} u & \text{in } B_s \setminus B_t, \\ u = 0 & \text{on } \partial B_s, \\ u = 0 & \text{on } \partial B_t. \end{cases}
\]
We note that the first eigensolution is radial, $u = f_{st}(|x|)$, and $f_{st} \in C^\infty((t,s))$ satisfies
\[
\frac{1}{r^{d-1}}\,\partial_r\big(r^{d-1}\partial_r f_{st}\big) = \rho^1_{st} f_{st} \ \text{ in } (t,s), \qquad f_{st}(s) = f_{st}(t) = 0.
\]

22 YVES CAPDEBOSCQ AND TIANRUI DAI

By the change of variable $r \to \frac{r_2-r_1}{s-t}(r-t) + r_1$, we find that $f_{st}(r) = f_{r_1 r_2}\big(\frac{r_2-r_1}{s-t}(r-t) + r_1\big)$, and $\rho^1_{st} = \big(\frac{r_2-r_1}{s-t}\big)^2 \rho^1_{r_1 r_2}$. Since
\[
\rho^1_{st} = \inf_{\substack{u\in H^1_0(B_s\setminus B_t)\\ u\neq 0}} \frac{\int_{B_s\setminus B_t}\nabla u\cdot\nabla u\,dx}{\int_{B_s\setminus B_t} u^2\,dx}
= \inf_{\substack{u\in H^1_0(B_s\setminus B_t)\\ u\neq 0}} \frac{\|\nabla u\|^2_{L^2(B_s\setminus B_t)}}{\|u\|^2_{L^2(B_s\setminus B_t)}},
\]
we conclude that $\|u\|^2_{L^2(B_s\setminus B_t)} \leq \big(\rho^1_{r_1 r_2}\big)^{-1}\big(\frac{s-t}{r_2-r_1}\big)^2 \|\nabla u\|^2_{L^2(B_s\setminus B_t)}$ for every $u \in H^1_0(B_s\setminus B_t)$. $\square$

A.3. Proof of lemma 10.

Proof. We have
\[
\begin{pmatrix} J_f(u_1, F) \\ \vdots \\ J_f(u_P, F) \end{pmatrix}
= J(u_1,\dots,u_P)^T\, T\big(x, E_{d+1,d}f_1, \dots, E_{d+1,d}f_{d^\star}, e_{d+1}\big).
\]
Thanks to proposition 8 there holds $\operatorname{rank}(f_1,\dots,f_{d^\star}) = d$. Furthermore
\[
\operatorname{span}\big(E_{d+1,d}f_1,\dots,E_{d+1,d}f_{d^\star}\big) \cap \mathbb{R}e_{d+1} = \{0\},
\]
thus proposition 9 shows that $\operatorname{rank}\big(T(x, E_{d+1,d}f_1,\dots,E_{d+1,d}f_{d^\star}, e_{d+1})\big) = d+1$. Since $\{u_1,\dots,u_P\} \in \mathcal{A}(P)$, we have $\operatorname{rank} J(u_1,\dots,u_P)^T = d+1$ at every $x$, thus $\operatorname{rank} F_x = d+1$. Note that for all $a \in \mathbb{R}^{P-1}$, $\operatorname{rank} P_a = P-1$, thus for every $x$ we have
\[
\operatorname{rank} P_a \circ F_x \leq \min(\operatorname{rank} P_a, \operatorname{rank} F_x) \leq d+1
\]
and
\[
\operatorname{rank} P_a \circ F_x \geq \operatorname{rank} P_a + \operatorname{rank} F_x - P = d.
\]
If $a \in \mathbb{R}^{P-1}\setminus G$, then there exists $x \in \Omega$ such that
(A.4)
\[
\operatorname{rank} P_a \circ F_x = d \iff \dim\ker(P_a\circ F_x) = d^\star + 1 - d
\iff \dim F_x^{-1}\big(\operatorname{span}\{(a_1,\dots,a_{P-1},1)\}\big) = d^\star + 1 - d.
\]
We have the implication $a \in \mathbb{R}^{P-1}\setminus G \implies (a_1,\dots,a_{P-1},1) \in \cup_x \operatorname{Im}(F_x)$. Conversely, if $(a_1,\dots,a_{P-1},1) \in \cup_x \operatorname{Im}(F_x)$, then there exists $x \in \Omega\setminus\cup_{i,j}\Gamma_{ij}$ and $v_a \in \mathbb{R}^{d^\star+1}$ such that $F_x v_a = (a_1,\dots,a_{P-1},1)$. Thus,
\[
\mathbb{R}v_a \oplus \ker(F_x) \subset F_x^{-1}\big(\operatorname{span}\{(a_1,\dots,a_{P-1},1)\}\big).
\]
Even though the choice of $v_a$ is arbitrary, any other choice would be in $\mathbb{R}v_a \oplus \ker(F_x)$, thus $\mathbb{R}v_a \oplus \ker(F_x) = F_x^{-1}\big(\operatorname{span}\{(a_1,\dots,a_{P-1},1)\}\big)$. Note that since $\operatorname{rank}(F_x) = d+1$, $\dim(\ker(F_x)) = d^\star - d$, therefore $\dim(\mathbb{R}v_a\oplus\ker(F_x)) = d^\star + 1 - d$, which from equation (A.4) implies that $\operatorname{rank} P_a\circ F_x = d$. In conclusion, we have
\[
P_a\circ F_x \text{ has rank } d+1 \iff (a_1,\dots,a_{P-1},1) \notin \cup_x \operatorname{Im}(F_x).
\]
Set $B = \cup_x \operatorname{Im}(F_x) \cap \{b\in\mathbb{R}^P \mid b_P = 1\}$.
The identity $\mathbb{R}^{P-1}\setminus G = P_{P-1,P}(B)$ therefore holds. We now follow the argument in [AC22, Lemma 4.1] and [GW75] and deduce that $\mathcal{H}^{P-1}(B) = 0$. The conclusion is attained as the $(P-1)$-Hausdorff measure equals the $(P-1)$-Lebesgue measure. $\square$

Appendix B. Proof of lemma 20

When $d \notin \{2, 4, 8\}$ it is impossible to find a family of continuous vector fields $\{h_1,\dots,h_d\}$ such that for every $x \in \partial B_1$ there holds
(1) $h_1(x) = x$,
(2) $\langle h_i, h_j\rangle(x) = \delta_{ij}$ for $i, j = 1,\dots,d$.
In odd dimensions, this is a consequence of the so-called hairy ball theorem. In general, the following result is proved in [Ker58] and [BM58].

Theorem. The tangent bundle of $S^{d-1}$ is trivial if and only if $d = 2$, $4$ or $8$.

Moreover, when $d \in \{2,4,8\}$ there exist $\{h_2,\dots,h_d\} \in \big(C^1(S^{d-1};\mathbb{R}^d)\big)^{d-1}$ such that $(h_1,\dots,h_d) \in SO_d(S^{d-1})$, where $SO_d$ refers to the real orthogonal matrices with determinant one. Explicit examples are:
(1) When $d = 2$, for all $(x_1, x_2) \in \partial B_1$, set $h_1 = (x_1, x_2)$ and $h_2 = (-x_2, x_1)$.
(2) When $d = 4$, for all $(x_1, x_2, x_3, x_4) \in \partial B_1$, set
\[
h_1 = (x_1, x_2, x_3, x_4), \quad h_2 = (-x_2, x_1, -x_4, x_3), \quad h_3 = (x_3, -x_4, -x_1, x_2), \quad h_4 = (x_4, x_3, -x_2, -x_1).
\]
(3) When $d = 8$, for all $(x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8) \in \partial B_1$, set
\[
\begin{aligned}
h_1 &= (x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8), & h_2 &= (-x_2, x_1, -x_4, x_3, -x_6, x_5, x_8, -x_7), \\
h_3 &= (-x_3, x_4, x_1, -x_2, -x_7, -x_8, x_5, x_6), & h_4 &= (-x_4, -x_3, x_2, x_1, -x_8, x_7, -x_6, x_5), \\
h_5 &= (-x_5, x_6, x_7, x_8, x_1, -x_2, -x_3, -x_4), & h_6 &= (-x_6, -x_5, x_8, -x_7, x_2, x_1, x_4, -x_3), \\
h_7 &= (-x_7, -x_8, -x_5, x_6, x_3, -x_4, x_1, x_2), & h_8 &= (-x_8, x_7, -x_6, -x_5, x_4, x_3, -x_2, x_1).
\end{aligned}
\]
The second part of lemma 20 follows from the following proposition.

Proposition. There exist $h_2,\dots,h_{d+1}$ in $\big(C^1(S^{d-1}, \mathbb{R}^d)\big)^d$ such that $\langle h_i, x\rangle = 0$ for $i = 2,\dots,d+1$ and $\operatorname{rank}(x, h_2,\dots,h_{d+1}) = d$ on $S^{d-1}$.

Proof. For every $x \in S^{d-1} \subset \mathbb{R}^d$, we denote $x = (x_1, x_2, \dots, x_d)$. Set
\[
h_i = \big(x_1 x_{d+2-i} - \delta_{1,d+2-i}, \ \dots, \ x_d x_{d+2-i} - \delta_{d,d+2-i}\big),
\]
where $\delta_{i,j}$ is the Kronecker symbol. We have $\langle h_i, x\rangle = \big(\sum_{j=1}^d x_j^2\, x_{d+2-i}\big) - x_{d+2-i} = 0$ for $i \geq 2$, thus each $h_i$ is tangent to $S^{d-1}$.
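As a numerical sanity check (not part of the original argument), one can verify the orthonormality of the explicit frames for d = 4 and d = 8 and the tangency and rank claims of the proposition at randomly sampled points of the sphere. This is a sketch; the helper names `frame_d4`, `frame_d8`, and `tangent_fields` are ours, and NumPy is assumed:

```python
import numpy as np

def frame_d4(x):
    """Rows are the explicit fields h1..h4 on S^3 listed above."""
    x1, x2, x3, x4 = x
    return np.array([
        [ x1,  x2,  x3,  x4],
        [-x2,  x1, -x4,  x3],
        [ x3, -x4, -x1,  x2],
        [ x4,  x3, -x2, -x1],
    ])

def frame_d8(x):
    """Rows are the explicit fields h1..h8 on S^7 listed above."""
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    return np.array([
        [ x1,  x2,  x3,  x4,  x5,  x6,  x7,  x8],
        [-x2,  x1, -x4,  x3, -x6,  x5,  x8, -x7],
        [-x3,  x4,  x1, -x2, -x7, -x8,  x5,  x6],
        [-x4, -x3,  x2,  x1, -x8,  x7, -x6,  x5],
        [-x5,  x6,  x7,  x8,  x1, -x2, -x3, -x4],
        [-x6, -x5,  x8, -x7,  x2,  x1,  x4, -x3],
        [-x7, -x8, -x5,  x6,  x3, -x4,  x1,  x2],
        [-x8,  x7, -x6, -x5,  x4,  x3, -x2,  x1],
    ])

def tangent_fields(x):
    """Fields h_2,...,h_{d+1} of the proposition: (h_i)_j = x_j x_{d+2-i} - delta_{j,d+2-i}."""
    d = len(x)
    # i = 2,...,d+1 corresponds to the 0-indexed coordinate k = d + 1 - i.
    return np.array([x * x[d + 1 - i] - np.eye(d)[d + 1 - i] for i in range(2, d + 2)])

rng = np.random.default_rng(0)
for d, frame in ((4, frame_d4), (8, frame_d8)):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)
    H = frame(x)
    assert np.allclose(H @ H.T, np.eye(d))    # rows are orthonormal
    assert np.isclose(np.linalg.det(H), 1.0)  # so (h1,...,hd) lies in SO_d

for d in (3, 4, 5, 8):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)
    hs = tangent_fields(x)
    assert np.allclose(hs @ x, 0.0)                        # each h_i is tangent to the sphere
    assert np.linalg.matrix_rank(np.vstack([x, hs])) == d  # rank (x, h_2,...,h_{d+1}) = d
```

Since each frame is orthogonal at every point, its determinant is $\pm 1$; by continuity on the connected sphere it is identically $+1$, which the determinant assertion samples at one point.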
Take
(B.1)
\[
H_d = \begin{pmatrix} h_1 & 1 \\ h_2 & x_d \\ \vdots & \vdots \\ h_{d+1} & x_1 \end{pmatrix}_{(d+1)\times(d+1)},
\]
that is,
\[
H_d = \begin{pmatrix}
x_1 & x_2 & \cdots & x_d & 1 \\
x_1 x_d & x_2 x_d & \cdots & x_d^2 - 1 & x_d \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
x_1 x_2 & x_2^2 - 1 & \cdots & x_d x_2 & x_2 \\
x_1^2 - 1 & x_1 x_2 & \cdots & x_d x_1 & x_1
\end{pmatrix}.
\]
There holds $\operatorname{rank} H_d = d+1$ for $d \geq 2$. The proof is by induction. When $d = 2$, we compute $\det H_2 = -1$. When $d \geq 3$, subtracting $x_d$ times the first row from the second turns the second row into $(0, \dots, 0, -1, 0)$; expanding the determinant along this row gives
\[
\det H_d = (-1)^{d+1}\det H_{d-1} = \dots = (-1)^{\frac{d(d+3)}{2}}.
\]
Thus, we have $\operatorname{rank} H_d = d+1$, which implies $\operatorname{rank}(h_1,\dots,h_{d+1}) = d$. We modify $h_{d+1} \to (-1)^{\frac{d(d+3)}{2}}\, h_{d+1}$ and modify the last line of $H_d$ to be $\big((-1)^{\frac{d(d+3)}{2}}\, h_{d+1}, (-1)^{\frac{d(d+3)}{2}}\, x_1\big)$ to obtain $H_d \in SO_{d+1}$. $\square$

Appendix C. Proof of theorem 6

Proof. We reproduce the proof given in [AC22] with the necessary adaptations for the reader's convenience. Thanks to theorem 7, and in turn equation (2.7), there exists a large $P_0$ such that $\mathcal{A}(P_0) \neq \emptyset$.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Write P ⋆ = � d+d⋆+1 α � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Take h ∈ H (Ω)P ⋆ , namely h = (h1, · · · , hP ⋆).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Take u1, · · · , uP0 ∈ A (P0) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Then (h1, · · · , hP ⋆, u1, · · · , uP0) ∈ A (P0 + P ⋆), and for x ∈ Ω \\ ∪i̸=jΓij, rank J (h1, · · · , hP ⋆, u1, · · · , uP0) (x) = d + 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Thanks to lemma 11, for a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='e aP0+P ⋆−1 ∈ RP0+P ⋆−1 , there holds rank J � h1 − aP0+P ⋆−1 1 uP0, · · · , hP ⋆ − aP0+P ⋆−1 P ⋆ uP0, · · · , uP0−1 − aP0+P ⋆−1 P0+P ⋆−1uP0 � (x) = d + 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Repeating this reduction P0 times, for a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content='e aT = � aT 1 , · · · , aT T � ∈ RT , where T = (P ⋆, · · · , P0 + P ⋆ − 1), there holds rank J � h1 − P0+P ⋆−1 � T =P ⋆ aT 1 uT −P ⋆+1, · · · , hP ⋆ − P0+P ⋆−1 � T =P ⋆ aT P ⋆uT −P ⋆+1 � (x) = d + 1, which means haT = � h1 − �P0+P ⋆−1 T =P ⋆ aT 1 uT −P ⋆+1, · · · , hP ⋆ − �P0+P ⋆−1 T =P ⋆ aT P ⋆uT −P ⋆+1 � ∈ A (P ⋆).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' For any ε > 0, taking aT small enough, since u1, · · · , uP0 are bounded in H (Ω), we conclude that ∥h − haT ∥H(Ω)P ⋆ ≤ ε.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' We then prove that A (P ⋆) is an open set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' For any x ∈ Ω, u = (u1, · · · , uP ⋆) ∈ H (Ω)P ⋆ , we define Det : Ω × H (Ω)P ⋆ → R given by Det (x, u) := P ⋆ � i1,··· ,id+1=1 det ��� Jf � ui1, · · · , uid+1 �� (x) �� Thanks to the continuity and the multi-linearity of Jf, Det (x, u) is continuous for every x ∈ Ω, u = (u1, · · · , uP ⋆) ∈ H (Ω)P ⋆ .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Take u ∈ A (P ⋆), for every x ∈ Ω, there holds Det (x, u) > 0 Therefore, there exists some constant C > 0 such that infx∈ΩDet (x, u) ≥ C > 0 Take ε > 0 small enough and v = (v1, · · · , vP ⋆) ∈ H (Ω)P ⋆ such that ∥u − v∥H(Ω)P ⋆ = P ⋆ � i=1 ∥ui − vi∥H(Ω) ≤ ε and Det (x, v) ≥ C 2 > 0, which implies rank (Jf (v1, · · · , vP ⋆)) = d + 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' Thanks to the relation between J and Jf, we conclude that rank (J (v1, · · · , vP ⋆)) = d + 1 which implies v ∈ A (P ⋆).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} +page_content=' □ Université Paris Cité, CNRS, Sorbonne Université, Laboratoire Jacques-Louis Lions (LJLL), F- 75006 Paris, France Email address: yves.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/Z9AzT4oBgHgl3EQfm_3D/content/2301.01574v1.pdf'} page_content='capdeboscq@u-paris.fr Email address: tianrui.dai@etu.u-paris.fr'
diff --git a/_9FQT4oBgHgl3EQfLzUR/content/tmp_files/2301.13265v1.pdf.txt b/_9FQT4oBgHgl3EQfLzUR/content/tmp_files/2301.13265v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..bf8169eece66b67009738bde1c94a50c6f614373 --- /dev/null +++ b/_9FQT4oBgHgl3EQfLzUR/content/tmp_files/2301.13265v1.pdf.txt @@ -0,0 +1,737 @@
arXiv:2301.13265v1 [hep-th] 30 Jan 2023

A New Gravitational Action For The Trace Anomaly
Gregory Gabadadze
Center for Cosmology and Particle Physics, Department of Physics, New York University, 726 Broadway, New York, NY, 10003, USA

Abstract
The question of building a local diff-invariant effective gravitational action for the trace anomaly is reconsidered. General Relativity (GR) combined with the existing action for the trace anomaly is an inconsistent low energy effective field theory. This issue is addressed by extending GR into a certain scalar-tensor theory, which preserves the GR trace anomaly equation, up to higher order corrections.
The extension introduces a new mass scale – assumed to be below the Planck scale – that governs four high dimensional terms in a local diff-invariant trace anomaly action. Such terms can be kept, while an infinite number of Planck-suppressed invariants are neglected. The resulting theory maintains two derivative equations of motion. In a certain approximation it reduces to the conformal Galileon, which could have physical consequences.

1 Introduction and summary

General Relativity (GR), combined with a quantum field theory exhibiting the gravitational trace anomaly [1], and described by its effective action [2,3], is an inconsistent field theory, despite the existence of a local diff-invariant trace anomaly action (see [4,5], and discussions below). Such a theory is strongly coupled at an arbitrarily low energy scale, as was shown in [6] in a different context. The inconsistency stems from the fact that the conformal part of a metric is not a propagating degree of freedom in classical GR1. This issue is fully relevant to GR coupled to the Standard Model (SM) of particle physics.
One way to avoid the inconsistency is to augment GR so that the conformal mode turns into a proper propagating degree of freedom, without spoiling observational consequences of GR, as is proposed in this work. An alternative to the augmentation of GR would be to cancel the trace anomaly by introducing additional low energy degrees of freedom.
As a reminder, one is concerned with the invariance of a quantum field theory (QFT) – such as the SM – with respect to global scale transformations. At the classical level the scale invariance could be exact, as in the Maxwell theory, or it could be violated explicitly, as in a massive scalar theory, or in GR. The quantum theory violates the scale invariance generically, irrespective of whether the classical theory is or is not scale invariant.
This violation appears in the trace of a stress-tensor, as it was first shown for gauge fields [7,8], and subsequently for a gravitational field [1]. The latter will be the focus of this work.
It is useful to build a low energy effective action that would incorporate quantum loops of the matter fields (see [9] and references therein). The variation of such a quantum effective action would give rise to the equations of motion which capture the trace anomaly [2,3]. These equations could then be solved in various physical settings, notably in astrophysics and cosmology, with the benefit that the solutions would automatically contain quantum effects due to the trace anomaly.
Riegert constructed a local effective action, $S_A$, which captures the trace anomaly in its equations of motion [2]. This construction was done for a particular class of constrained fields, and hence was regarded by Riegert as breaking diff-invariance.
Komargodski and Schwimmer [4] showed that the same functional, $S_A$, but written in terms of unconstrained fields, gives a diff-invariant trace anomaly action.
1A propagating degree of freedom is defined as a mode with proper quadratic time derivative and spatial gradient terms in Minkowski space-time, which is not removable by gauge transformations or by field redefinitions, nor is it restricted by constraints. The scale factor of the Friedmann-Lemaître-Robertson-Walker (FLRW) metric is a conformal mode which has a ghost-like ("wrong sign") quadratic time derivative term in the GR action. This "wrong sign" is crucial for FLRW cosmology. However, the scale factor is not a propagating degree of freedom because its spatial dynamics is constrained within GR; had it been a propagating mode, it would have led to insurmountable instabilities because of its ghost-like quadratic time derivative term. The GR action supports only two propagating modes, the helicity ±2 states of a massless graviton, often referred to as the tensor modes.
This finding placed the Riegert action on a solid footing, which it lacked for many years. Moreover, ref. [5] showed that the Riegert action emerges as a local diff-invariant Wess-Zumino term in a coset for a non-linearly realized conformal symmetry, broken by the scale anomaly.
These findings offer an important perspective: the GR action depends on the metric g and its derivatives. The metric g can formally be decomposed as $g = e^{2\sigma}\bar g$, and GR can be viewed as a theory non-linearly realizing a Weyl symmetry that shifts σ and conformally transforms ḡ, keeping g invariant. In GR this is a "spurious" symmetry since the above split of g is arbitrary, and the σ field can be gauged away by the very same Weyl transformations.
However, the Riegert action is a local diff-invariant functional of σ and ḡ which is not invariant under the "spurious" Weyl transformations [4]. Thus, the local diff-invariant action containing the Einstein-Hilbert and Riegert terms, both written in terms of σ and ḡ, can be viewed as an action non-linearly realizing the "spurious" Weyl symmetry [5].
The above arguments, however, suggest that something must be wrong: the field σ was "spurious" in GR, but becomes unremovable once GR is supplemented by the Riegert action. Indeed, in the GR action the metric field can absorb the kinetic term of σ, rendering it in the Riegert action infinitely strongly coupled at arbitrarily low energies [6]2.
One way to resolve this problem is to augment the classical GR action and only then couple it to a quantum field theory. I will show that the following action
$$S_{R-\bar R} = M^2 \int d^4x \sqrt{-g}\, R \;-\; \bar M^2 \int d^4x \sqrt{-\bar g}\, \bar R\,, \qquad (1)$$
with $\bar R \equiv R(\bar g)$, $M = M_{\rm Pl}/\sqrt{2} \gg \bar M$, and the two metric tensors connected as
$$g_{\mu\nu} = e^{2\sigma}\, \bar g_{\mu\nu}\,, \qquad (2)$$
gives a viable low energy theory of gravity with a proper propagating conformal mode, that can be consistently coupled to the Riegert action.
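The statement that g is insensitive to the split (2) can be verified in one line of symbolic algebra. The sketch below (variable names are illustrative, not taken from the paper) checks a representative component of $g = e^{2\sigma}\bar g$ under the shift of σ and the compensating rescaling of ḡ:

```python
import sympy as sp

sigma, beta, gbar = sp.symbols('sigma beta gbar', real=True)

# Split (2): a representative component of g = exp(2*sigma) * gbar.
g = sp.exp(2*sigma) * gbar

# "Spurious" Weyl transformation: sigma -> sigma + beta, gbar -> exp(-2*beta)*gbar.
g_shifted = sp.exp(2*(sigma + beta)) * sp.exp(-2*beta) * gbar

# g is invariant, so in pure GR the split into (sigma, gbar) is arbitrary
# and sigma can be gauged away; the Rbar term in (1) breaks this freedom.
assert sp.simplify(g_shifted - g) == 0
```

The second term in (1) depends on ḡ alone, so it is not invariant under this shift, which is what prevents σ from being gauged away.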
The "spurious" Weyl transformation, σ → σ + β(x), ḡ → e^{−2β(x)}ḡ, with an arbitrary β(x), would have made σ gauge-removable in GR; however, this is not a symmetry of the second term in the action (1), and therefore σ can't be gauged away. Furthermore, the action (1) possesses a global symmetry that neither the first nor the second term on the r.h.s. separately has (Section 2).
2Komargodski and Schwimmer constructed the local diff-invariant Riegert action, and combined it with GR to prove the a-theorem (for an earlier work using a non-local Riegert action for the a-theorem, see [10]). They used the metric field for symmetry bookkeeping, but its dynamics was unimportant for the proof itself; hence the metric was frozen, and only the conformal mode (a dilaton) was utilized [4]. The issue discussed here does not affect the Komargodski-Schwimmer proof of the a-theorem, since their construction does not require a dynamical tensor field [4]. See more comments in Section 4.
Combining (1) with the results of [2–4], the total effective action that captures the GR trace anomaly equation reads as follows:
$$S_{\rm eff} = S_{R-\bar R} + S_A(\sigma, \bar g)\,, \qquad (3)$$
where $S_A$ has the form [2–4]
$$S_A = a \int d^4x \sqrt{-\bar g} \left[ \sigma \bar E - 4 \bar G^{\mu\nu} \bar\nabla_\mu\sigma\, \bar\nabla_\nu\sigma - 4 (\bar\nabla^2\sigma)(\bar\nabla\sigma)^2 - 2 (\bar\nabla\sigma)^4 \right] + c \int d^4x \sqrt{-\bar g}\, \sigma \bar W^2\,, \qquad (4)$$
with the Euler (Gauss-Bonnet) invariant, $\bar E = \bar R^2_{\mu\nu\alpha\beta} - 4 \bar R^2_{\mu\nu} + \bar R^2$, and the Weyl tensor squared, $\bar W^2 = \bar R^2_{\mu\nu\alpha\beta} - 2 \bar R^2_{\mu\nu} + \bar R^2/3$. The action (4) emerges as a Wess-Zumino term in an SO(2,4)/ISO(1,3) coset, which can be recast as a boundary term in 5D [5]; this distinguishes (4) from other terms in the effective field theory. Note that the terms in $S_A$ belong to the general class of the Horndeski theories giving rise to second order equations of motion [11].
There are an infinite number of additional higher dimensional counter-terms supplementing (3).
All these terms will be suppressed by respective powers of the scale $M = M_{\rm Pl}/\sqrt{2}$, or of higher scales, such as $M(M/\bar M)^2$; they will be neglected. At the same time, $S_A$ retains a finite number of terms which are suppressed by M̄ ≪ M, or by certain geometric mean scales such as $(\bar M M^2)^{1/3}$, all of them lower than M.
It is due to the separation of scales between M̄ and M that one can regard (3) as a meaningful low energy action obtained by "integrating out" quantum loops of a QFT. The coefficients a and c depend on the numbers and representations of the low energy physical degrees of freedom [1]. The very same degrees of freedom could also give rise to classical sources for the gravitational field. Hence the action (3) should be supplemented by the classical action for the fields representing those low energy degrees of freedom, but without quantizing those fields further3.
The signature used in this work is "mostly plus", (−,+,+,+). The Ricci tensor convention is as follows, $R_{\mu\nu} = \partial_\alpha \Gamma^{\alpha}_{\mu\nu} - \ldots$; Riegert, following Ref. [9], uses the "mostly minus" signature, and the opposite sign for the curvature tensor. Ref. [4] uses the "mostly minus" signature, the curvature convention opposite to Riegert's, and their τ and g are, respectively, −σ and ḡ here. The actions in refs. [2], [4], and in eq. (4) agree with each other after these different conventions are taken into consideration.
3Not all quantum loops are proportional to positive powers of ℏ when massive fields are involved; some classical effects emerge from such loops [12]. Hence, one might worry that without considering the quantum loops some classical effects would be lost. However, keeping the respective classical fields in the effective action would enable one to retain those classical effects via nonlinear classical perturbation theory (or via exact classical or numerical solutions).
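To get a feel for the separation of scales invoked above, the snippet below orders the suppression scales numerically for an assumed sample ratio M̄/M = 10⁻³ (an illustrative value, not one taken from the paper):

```python
# Illustrative ordering of the suppression scales, in units where M = 1.
# The ratio eps = Mbar/M = 1e-3 is an assumed sample value, not from the paper.
M = 1.0
eps = 1e-3
Mbar = eps * M

geometric_mean = (Mbar * M**2) ** (1.0 / 3.0)  # scale suppressing some retained S_A terms
neglected = M * (M / Mbar) ** 2                # scale suppressing neglected counter-terms

# Retained trace-anomaly terms are governed by scales below M;
# neglected invariants sit at M or above.
assert Mbar < geometric_mean < M < neglected
```

Any hierarchy M̄ ≪ M produces the same ordering, which is what justifies keeping the finite set of anomaly terms while dropping the Planck-suppressed invariants.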
2 The R − R̄ theory

Consider the action already quoted in the previous section
$$S_{R-\bar R} = M^2 \int d^4x \sqrt{-g}\, R - \bar M^2 \int d^4x \sqrt{-\bar g}\, \bar R\,, \qquad (5)$$
with the two metric tensors, g and ḡ, related to one another by (2). At the classical level, the above can be viewed as an action for one metric field – say, the metric g – and for the scalar σ, which gets its proper-sign kinetic term from the R̄ term4.
Let us consider metric fluctuations above a flat background, $h_{\mu\nu} = g_{\mu\nu} - \eta_{\mu\nu}$, and decompose them in a standard fashion according to the representations of the 3D rotation group (these approximate well fluctuations about an arbitrary nonsingular classical background at length scales much shorter than the characteristic curvature radius of the background):
$$h_{00} = 2n\,, \quad h_{0j} = v_j + \partial_j u\,, \quad h_{ij} = t_{ij} + \partial_i w_j + \partial_j w_i + 2 \partial_i \partial_j \rho - 2 \delta_{ij} \tau\,, \qquad (6)$$
where $i,j = 1,2,3$, $v_j$ and $w_j$ are transverse three-vectors, $t_{ij}$ is a transverse-traceless tensor, and τ is the conformal mode. Furthermore, one can choose the gauge $u = 0$, $\rho = 0$. Then, the scalar part of the action (5) – which decouples from the tensor and vector parts – equals the space-time integral of the following expression:
$$2M^2 \left[ -3\dot\tau^2 + (\partial_j \tau)^2 - 2 n\, \partial_j^2 \tau \right] - 2\bar M^2 \left[ -3 (\dot\tau + \dot\sigma)^2 + (\partial_j(\tau+\sigma))^2 - 2 (n+\sigma)\, \partial_j^2 (\tau+\sigma) \right]. \qquad (7)$$
If the terms proportional to $\bar M^2$ were absent, the variation of (7) w.r.t. n would have given a constraint, $\partial_j^2 \tau = 0$, rendering the conformal mode, τ, non-propagating. This however is no longer the case in (7): its variation w.r.t. n gives another constraint, $\partial_j^2 \tau = \epsilon^2 \partial_j^2 \sigma/(1-\epsilon^2)$, that relates the conformal mode τ to the scalar σ, $\tau = \epsilon^2 \sigma/(1-\epsilon^2)$ (here, $\epsilon = \bar M/M \ll 1$, and a zero mode of the Laplacian has been removed by choosing the appropriate spatial boundary conditions). Substituting the latter into (7), one gets
$$-\,\frac{6 M^2 (1-\epsilon^2)}{\epsilon^2}\, (\partial_\mu \tau)(\partial^\mu \tau) = -\,\frac{6 \bar M^2}{1-\epsilon^2}\, (\partial_\mu \sigma)(\partial^\mu \sigma)\,, \qquad (8)$$
This is the key feature +of the R − ¯R theory. +4This would correspond to the Einstein frame. Section 4 will instead regard this action as a functional +of ¯g and σ, corresponding to the Jordan frame. +5 + +It is convenient to rewrite (5) as follows: +SR− ¯R = +� +d4x√−g +� +M2 R(g) − Φ2 R(g) − 6 (∇Φ)2� +, +(9) +where Φ ≡ +¯ +Me−σ, and the covariant derivative, ∇, is that of the metric g. +Owing to +the choice of the sign of the second term on the right hand side of (5), the kinetic term +for the scalar Φ has the proper sign in (9); this sign will remain the same after complete +diagonalization of the action (9), as shown in Appendix. +The scale ¯ +M enters the action (9) only through the vacuum expectation value (VEV), +⟨Φ⟩ = ¯ +M. Since ¯ +M << M, this VEV is within the realm of the effective field theory. The last +two terms in (9) non-linearly realize a Weyl symmetry which transforms g, and is explicitly +broken by the Einstein-Hilbert term. +It is straightforward to check that the action (9) is invariant w.r.t. the following trans- +formations of the fields +Φ → Φ + ω M (1 − (Φ/M)2) +1 + ω Φ/M +, +gµν → gµν +(1 + ω Φ/M)2 +1 − ω2 +, +(10) +where ω ≡ tanh(λ), with λ being an arbitrary constant5. +Furthermore, there exists an invariant combination of the metrics g and ¯g +ˆgµν = gµν − ǫ2 ¯gµν = gµν +� +1 − Φ2 +M2 +� +, +(11) +and if a QFT coupled to ˆg in a conventional manner, LSM = −1 +4 +√−ˆg ˆgµνˆgαβFµαFνβ + · · ·, +then it will also be invariant. Since one should require, ǫ ≡ ( ¯ +M/M) << 1, the coupling to +the QFT is approximated by the coupling to the metric gµν. The resulting classical equations +approximate GR, as long as ¯ +M << M 6. +5The symmetry transformations (10) may look mysterious, but their essence is simple: in Appendix it +is shown that certain non-linear conformal transformations of g and Φ, bring the action (9) to that of GR +coupled to a massless scalar field kinetic term, which is invariant w.r.t. +the shift symmetry. 
The shift transformation, once rewritten in terms of the original variables, gives (10).
6One could consider a different approach in which QFT fields would couple to g, instead of ĝ, thus breaking the symmetry (10). This would result in new terms generated in the effective action due to the QFT loops; notably, the mass term for the σ field would be induced. The latter would be proportional to some positive power of the UV scale of the QFT, suppressed by powers of M. Furthermore, σ would couple to the stress-tensor in the linearized approximation, providing a gravity-competing force at distances smaller than the inverse mass of σ. One would then need to impose a constraint on M̄ to suppress the coupling of σ to the stress-tensor. While this is a logical possibility, it would also lead to the trace anomaly equation being modified as compared with that of GR (such modifications can be kept small by imposing constraints on M̄). The present work focuses on a scenario where the symmetry (10) is preserved by the QFT coupling, but can be adapted to the case when the matter couples to g.
The requirement of the invariance under (10) prohibits certain terms from being added to the action (9). For instance, an additional kinetic term for Φ, the cosmological constant, $\sqrt{-g}$, the quadratic mass term, $\sqrt{-g}\,\Phi^2$, and the quartic term, $\sqrt{-g}\,\Phi^4$, don't respect the symmetry (10), and are prohibited from entering the action with arbitrary coefficients. That said, there is a particular combination of the latter three terms which is invariant under (10):
$$\pm\, \Lambda^4 \sqrt{-g} \left( 1 - \frac{\Phi^2}{M^2} \right)^2, \qquad (12)$$
with Λ being some arbitrary dimensionful constant. If the above term is included in the action, then both the quadratic and quartic terms for Φ will be connected to the cosmological constant. The cosmological constant Λ will be tuned to zero (or be vanishingly small as compared to M̄), and this will also nullify the quadratic and quartic terms in (12).
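The invariance of the combination (12) under (10) lends itself to a direct symbolic check. In the sketch below (variable names are illustrative), the metric transformation is represented by a single overall conformal factor, so that in 4D $\sqrt{-g}$ picks up the square of that factor:

```python
import sympy as sp

Phi, M, sqrtg = sp.symbols('Phi M sqrtg', positive=True)
w = sp.symbols('omega', real=True)

# Transformation (10) of Phi; it is equivalent to (Phi + w*M)/(1 + w*Phi/M).
Phi_new = Phi + w*M*(1 - (Phi/M)**2) / (1 + w*Phi/M)

# Metric rescaling in (10): g_mn -> f * g_mn, hence sqrt(-det g) -> f**2 * sqrt(-det g).
f = (1 + w*Phi/M)**2 / (1 - w**2)

# Candidate invariant (12), with sqrtg standing for sqrt(-det g):
before = sqrtg * (1 - Phi**2/M**2)**2
after = (f**2 * sqrtg) * (1 - Phi_new**2/M**2)**2

assert sp.simplify(after - before) == 0
```

The same bookkeeping shows why an isolated $\sqrt{-g}$, $\sqrt{-g}\,\Phi^2$, or $\sqrt{-g}\,\Phi^4$ fails the test: only their combination in (12), i.e. $\pm\Lambda^4\sqrt{-\hat g}$, transforms into itself.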
The term in (12) is nothing other than $\pm\Lambda^4 \sqrt{-\hat g}$. One could also consider the terms $\sqrt{-\hat g}\, \hat R$ and $\sqrt{-\hat g}\, (\hat\nabla\Phi)^2/(1 - \Phi^2/M^2)^2$ to be added to the action (9), with arbitrary coefficients; these terms would individually respect the symmetry (10). However, only one linear combination of the above two terms retains the structure of the action (1) intact, which is necessary to preserve the same trace anomaly equation as one gets in GR (see Section 4). Therefore, in addition to the fine tuning of the cosmological constant, one needs to adopt another fine tuning of the relative coefficients between the above two terms7.
The action (1), and the relation (2), are invariant under the following "duality" transformations: $M \leftrightarrow \bar M$, $g \leftrightarrow -\bar g$, $\sigma \to -\sigma$. The latter leads to $\hat g \to \epsilon^{-2}\hat g$, rendering the following three terms invariant: $M^4\sqrt{-\hat g}$, $M^2\sqrt{-\hat g}\,\hat R$ and $\sqrt{-\hat g}\,(\hat\nabla\Phi)^2/(1-\Phi^2/M^2)^2$; furthermore, coupling of ĝ to massless scalars, spinors, and vector fields can be made invariant by inserting appropriate powers of M in front of their kinetic terms. Therefore, the above "duality symmetry" does not help to avoid the need for the fine tuning of the two parameters discussed above.
The equations of motion that follow from (9) read:
$$\left( M^2 - \Phi^2 \right) \left( R_{\mu\nu} - \frac12 g_{\mu\nu} R \right) + \left( \nabla_\mu \nabla_\nu - g_{\mu\nu} \nabla^2 \right) \Phi^2 = 6 \left( \nabla_\mu\Phi\, \nabla_\nu\Phi - \frac12 g_{\mu\nu} (\nabla\Phi)^2 \right),$$
$$-\,\Phi R + 6 \nabla^2 \Phi = 0\,, \qquad (13)$$
where one can easily include the matter stress-tensor on the r.h.s. of the first equation. It is straightforward to show that a flat FLRW solution of GR is also a solution of (13), amended by the matter stress-tensor.
7One could relax this tuning somewhat to obtain a trace anomaly equation that would slightly differ from that of GR; this is easy to do, but there is no urgency to pursue such extensions.

3 Toward the quantum effective theory

In general, one could quantize (9) as an effective low energy action (see [13], and references to earlier literature). However, there is no need for a general approach here.
Instead, one could regard gravity as a dynamical classical field coupled to a QFT [9]. This is justified at energy scales much below the Planck scale.
Quantum loops of the QFT will generate an infinite number of counter-terms that one needs to include in the effective action. To deal with the loop divergences one could use, e.g., dimensional regularization.
Due to the symmetry (10) of the action (9) and of its coupling to a QFT, all the counter-terms should also be invariant under (10)8. Hence, one could identify all the symmetry preserving counter-terms by writing all possible invariants in terms of the metric ĝ, expressed in terms of g and Φ. Furthermore, the metric ĝ approximately equals g, up to corrections of the order of 1/M², hence one could just use g as an approximation. The goal of this section is to understand at what energy scales those counter-terms are significant.
To achieve this goal, let us introduce the dimensionful fields
$$H_{\mu\nu} = M \left( g_{\mu\nu} - \eta_{\mu\nu} \right), \qquad \Sigma = \bar M - \Phi\,, \qquad (14)$$
and consider two different limits of the theory (9).
First, consider the limit M → ∞, while M̄ is fixed, and H and Σ are finite. It is then straightforward to see that the Lagrangian in (9) reduces to a free field theory of H and Σ,
$$-\frac14 H^{\mu\nu} G_{\mu\nu}(H) - 6 (\partial\Sigma)^2\,, \qquad (15)$$
where $G_{\mu\nu}(H)$ denotes the linearized Einstein tensor. The coupling to the stress-tensor, $T_{\mu\nu}$, is proportional to $H_{\mu\nu} T^{\mu\nu}/M$, up to additional corrections of the order of $O(\bar M^2/M^3)$, and also vanishes in the limit as long as the stress-tensor is held finite. These considerations show that the loop-generated counter-terms in the full nonlinear theory should vanish in the M → ∞ limit.
Second, consider the limit M → ∞, M̄ → ∞, with $\epsilon = \bar M/M \ll 1$ being fixed; the
8I will neglect non-perturbative quantum gravity effects due to, e.g., black holes or wormholes, which are expected to violate global symmetries, and in particular to violate (10).
Such violations are exponentially suppressed at low energies in the quasi-classical approximation [14]; they should be contrasted with the violation due to the trace anomaly, which is suppressed only by powers of the scale M̄, and will be included in the effective action.
fields H and Σ are held finite in this limit, too. The resulting Lagrangian reads
$$-\,\frac{1-\epsilon^2}{4}\, H^{\mu\nu} G_{\mu\nu}(H) + 2\epsilon\, \Sigma\, R_L(H) - 6 (\partial\Sigma)^2\,, \qquad (16)$$
where $R_L(H)$ denotes the linearized Ricci scalar, and the coupling to the matter stress-tensor is proportional to $H_{\mu\nu} T^{\mu\nu}/M$, up to additional corrections of the order of $O(\bar M^2/M^3)$. The expression (16) represents a free field theory of kinetically mixed tensor and scalar fields, and can easily be diagonalized by a linear conformal transformation of $H_{\mu\nu}$. Therefore, the counter-terms of the full nonlinear theory ought to vanish in the second limit, too. The latter condition is more restrictive than the one obtained from the first limit above. It implies that there will not exist counter-terms proportional to $\bar M^p/M^q$ with $p \ge q$. In particular, the following counter-terms
$$\bar M^2 \left( \frac{\bar M}{M} \right)^n \sqrt{-g}\, \Sigma^2\,, \qquad \left( \frac{\bar M}{M} \right)^m \sqrt{-g}\, \Sigma^4\,, \qquad (17)$$
will be absent for arbitrary integers n ≥ 0 and m ≥ 0 (i.e., such terms will not be generated with n ≥ 0 and m ≥ 0 even if they form combinations that are invariant under (10)).
Let us now summarize the results of the above discussions in terms of the fields of the action (9). One concludes that the counter-terms in (9) will be proportional to
$$\frac{R^3}{M^2}\,, \quad \frac{R \Box R}{M^2}\,, \quad \frac{\Sigma^2 R^2}{M^2}\,, \quad \frac{\Sigma^2 R \Box R}{M^4}\,, \;\ldots \qquad (18)$$
Each of these counter-terms will have corrections that make them invariant under (10); however, each of these corrections is higher order in 1/M².
All of these counter-terms are suppressed by M. Furthermore, due to the VEV, ⟨Φ⟩ = M̄, some of the higher dimensional operators will end up depending on M̄, too:
$$\frac{\bar M^2 R^2}{M^2}\,, \qquad \frac{\bar M^2 R \Box R}{M^4}\,, \;\ldots \qquad (19)$$
The lowest scale that suppresses the latter operators is $M(M/\bar M)$, which is significantly higher than M, thanks to M̄ ≪ M.
All the arguments above apply to conventional local counter-terms encountered in the conventional perturbative series expansion in powers of $\nabla^2/M^2$ [13]. This, however, says nothing about possible non-local terms that may arise from the loops. Due to their nonlocal nature, such terms could remain significant even when $\nabla^2 \ll M^2$. The trace anomaly introduces precisely such terms, which can then be rewritten as local terms, but at the expense of introducing a new field, σ. The new terms in the effective action are suppressed by a scale much smaller than M. Hence, it is meaningful to keep the trace-anomaly induced terms, but ignore all the other conventional higher dimensional counter-terms suppressed by M and higher scales, as done below.

4 The effective action

The scale anomaly in the trace of a massless QFT stress-tensor reads [1]:
$$T^{\mu}_{\ \mu} = a\,E(g) + c\,W^2(g) + b\,\Box R(g)\,, \qquad (20)$$
where the coefficients a and c depend on the field content of the theory, while b is arbitrary, and for that reason will not be included9. The goal is to find an effective action, $\tilde S_A(g)$, which incorporates the loop effects of the QFT so that the trace of its metric variation gives (20):
$$T^{\mu}_{\ \mu} = \frac{2\, g_{\alpha\beta}}{\sqrt{-g}}\, \frac{\delta \tilde S_A(g)}{\delta g_{\alpha\beta}} = a\,E(g) + c\,W^2(g)\,. \qquad (21)$$
Riegert [2] argued that $\tilde S_A(g)$ cannot be written as a local diff-invariant functional if one uses only one field, g. Yet, the variation of the total action, $S_{\rm GR}(g) + \tilde S_A(g)$, should give rise to a local equation for the trace anomaly, with the r.h.s. defined by (21).
To find the action, Riegert introduced a decomposition, $g_{\mu\nu} = e^{2\sigma}\bar g_{\mu\nu}$, with the metric ḡ restricted to have a fixed determinant [2]. Using this decomposition, Riegert constructed a local action, $S_A(\bar g, \sigma)$, such that the variation of $S_{\rm GR}(e^{2\sigma}\bar g) + S_A(\bar g, \sigma)$ w.r.t. σ gives the trace anomaly equation.
The functional $S_A(\bar g, \sigma)$ was regarded in [2] as breaking diff-invariance since the determinant of ḡ had been fixed [2].
It is however straightforward to restore diff-invariance in $S_A(\bar g, \sigma)$: one could use exactly the same action, $S_A(\bar g, \sigma)$, but declare that the restriction on the determinant of ḡ has been lifted. Conversely, such a diff-invariant $S_A(\bar g, \sigma)$ would yield back Riegert's original local but non-invariant $S_A(\bar g, \sigma)$ after gauge-fixing the determinant of ḡ.
Logically, Komargodski and Schwimmer [4] reconstructed Riegert's action $S_A(\bar g, \sigma)$ as a functional of two fields, σ and ḡ, without assuming any constraint on ḡ, only requiring that the action reproduce the trace anomaly upon simultaneous Weyl transformations of σ and ḡ.
9Both b and the gauge field trace anomaly [7,8] can easily be included in the effective action [2].
Furthermore, the same $S_A(\bar g, \sigma)$ was obtained as a Wess-Zumino term in a coset construction for non-linearly realized conformal symmetry [5].
The above facts suggest that the local and invariant Riegert action, $S_A(\bar g, \sigma)$, given in (4), should perhaps play a more fundamental role than it did in Riegert's construction, where it was merely used as an intermediate step toward motivating a non-local action10.
To this end, one could add the action (4) to the GR action expressed in terms of σ and ḡ, $S_{\rm GR}(e^{2\sigma}\bar g)$, and treat both σ and ḡ as dynamical fields without assuming any constraint on ḡ. This theory, however, describes an infinitely strongly coupled system when restricted to its Minkowski background [6]. It is a strongly coupled theory with an unacceptably low strong coupling scale on any small curvature background. This is so because the theory, $S_{\rm GR}(e^{2\sigma}\bar g) + S_A(\bar g, \sigma)$, does not contain a kinetic term for σ.
An apparent kinetic term for σ in $S_{\rm GR}(e^{2\sigma}\bar g)$ can be removed by a field redefinition of the metric, $\bar g = e^{-2\sigma} g$, rendering the σ field endowed with nonlinear interactions in $S_A(g, \sigma)$, but without a quadratic kinetic term. Such a theory is intractable as an effective field theory11.
To bring some normalcy, one needs to introduce a kinetic term for σ. However, adding in any new term explicitly depending on σ – besides the ones already present in (4) – would spoil the recovery of the correct trace anomaly. One way out is to use the R − R̄ theory as a functional of ḡ and σ, where the new term, R̄, does not depend on σ.
This leads one to explore a new action consisting of the R − R̄ theory plus (4):
$$S_{\rm eff}(\bar g, \sigma) = M^2 \int d^4x \sqrt{-\bar g}\, e^{2\sigma} \left[ R(\bar g) + 6 (\bar\nabla\sigma)^2 \right] - \bar M^2 \int d^4x \sqrt{-\bar g}\, R(\bar g) + a \int d^4x \sqrt{-\bar g} \left[ \sigma \bar E - 4 \bar G^{\mu\nu} \bar\nabla_\mu\sigma\, \bar\nabla_\nu\sigma - 4 (\bar\nabla^2\sigma)(\bar\nabla\sigma)^2 - 2 (\bar\nabla\sigma)^4 \right] + c \int d^4x \sqrt{-\bar g}\, \sigma \bar W^2\,. \qquad (22)$$
This is a local diff-invariant functional of σ and ḡ. It can be viewed as an effective action for quantized σ and the QFT12.
The classical matter fields – not shown in (22) – couple to the metric $\hat g_{\mu\nu} = g_{\mu\nu}\left( 1 - \bar M^2 e^{-2\sigma}/M^2 \right) \simeq g_{\mu\nu} = e^{2\sigma}\bar g_{\mu\nu}$. Variation of the action (22) with respect to σ, with a subse-
10Riegert's subsequent procedure of constructing a non-local but diff-invariant functional introduces a four derivative term in the action [2]. This route unavoidably leads to a negative energy state, a ghost (or to a violation of unitarity). It is also an unnecessary route – as argued, a local and diff-invariant effective action for the trace anomaly does exist. More general actions with four-derivative terms were explored in [15], with some potentially interesting applications. Regretfully, these actions also have ghosts.
11This is not an issue for the work [4] insofar as ḡ is not a dynamical field and is frozen to equal the flat space metric; in the latter case the GR action gives a kinetic term for σ, albeit with a wrong overall sign.
This sign can be flipped to the correct one by choosing a "wrong sign" GR term to start with; this does not cause a problem in [4] since the theory does not use the dynamics of tensor perturbations.
12Note that the Riegert action in the second line in (22) contains the Galileon terms that are not suppressed by M. These can be understood as terms obtained by "integrating in" the σ field to make the otherwise nonlocal anomaly action local. In such a case, not all the terms containing σ should be suppressed by M. Conversely, integrating out σ would lead to nonlocal terms, all of them suppressed by M.
quent substitution of $\bar g = e^{-2\sigma} g$, gives an equation that does not depend on σ:
$$\frac{\delta S_{\rm eff}(\sigma, \bar g)}{\delta\sigma}\bigg|_{\bar g_{\mu\nu} = e^{-2\sigma} g_{\mu\nu}} = 0\,, \;\Rightarrow\; 2 M^2 R = a\,E(g) + c\,W^2(g)\,, \qquad (23)$$
which is exactly the trace-anomaly equation.
Now that the tensor ḡ is an unconstrained dynamical field, one should vary (22) w.r.t. ḡ, too. This variation will give ten modified Einstein's equations. One can show that the trace of these equations does not coincide with the equation (23). Thus, there are eleven independent equations: one equation of motion of σ, and ten equations for ten components of the symmetric tensor ḡ. By determining σ and ḡ from these equations, one can determine ĝ ≃ g, which is the metric experienced by the classical matter fields. The modified Einstein equations approximate well the conventional equations as long as M̄ ≪ M, and as long as the stress-tensor is much smaller than M̄⁴.
Alternatively, one could rewrite the ten equations for ḡ as ten equations for g, which would also depend on σ. The σ equation, on the other hand, can be kept entirely in terms of g (23). Thus, there would still be eleven equations for eleven unknowns, g and σ.
The theory given by (22) is strongly coupled at the scale M̄.
This can be seen in the flat +space expansion of (22) obtained either by taking the limit M → ∞, or by the substitution +gµν = ηµν +� +1 − Φ2/M2�−1 , +¯gµν = e−2σ � +1 − Φ2/M2�−1 ηµν. +(24) +In either case, the theory reduces to a conformal Galileon of the canonically normalized field, +π ≡ σ ¯ +M [16], with the nonlinear Galileon terms suppressed by the scale ¯ +M +Seff|¯gµν≃e−2π/ ¯ +Mηµν ≃ +� +d4x +� +−6e−2π/ ¯ +M(∂π)2 + 4a +� +−∂2π(∂π)2 +¯ +M3 ++ (∂π)2(∂π)2 +2 ¯ +M4 +� ++ . . . +� +, (25) +where the dots stand for sub-leading terms suppressed by powers of M. +Below the scale ¯ +M the full theory (22) is weakly coupled and describes three degrees of +freedom – two helicity states of a massless graviton, and one massless scalar, π ≡ σ ¯ +M. The +later becomes strongly coupled at the scale of the order of ¯ +M, as seen from the Galileon +terms in (25). The Lagrangian (25) makes it clear that the theory without the ¯ +M2√−¯g ¯R +term is untenable – taking +¯ +M → 0 leaves the nonlinear terms in (25) infinitely strongly +coupled. +The additional scalar, σ, does not couple to the stress tensor in the linearized approxima- +tion in Minkowski space, therefore there is no fifth force constraint. It can couple to matter +on curved backgrounds. Physical consequences of the action (22) will be studies elsewhere. +12 + +Acknowledgments: I’d like to thank Massimo Porrati, David Spergel, and Giorgi +Tukhashvili for useful discussions. The work was supported in part by NSF grant PHY- +2210349. +Appendix +A complete diagonalization of (9) can be done by using the following conformal transforma- +tion +gαβ = ˆgαβ +� +1 − Φ2 +M2 +�−1 +. +(26) +It brings the action for the metric ˆg to the canonical form +SR− ¯R = +� +d4x +� +−ˆg +� +M2 R(ˆg) − +6 ( ˆ∇Φ)2 +(1 − Φ2/M2)2 +� +. +(27) +The above conformal transformation is non-singular, and (27) is sensible, as long as +Φ = ¯ +Me−σ << M, +⇒ +σ >> − ln(M/ ¯ +M) . 
(28)

In addition, the effective theory is valid if |σ| << 1, which is a stronger constraint.

The scalar field action in (27) can be simplified further: it can be reduced to a free field interacting with gravity, thanks to the following non-linear field redefinition

U = \left(\frac{1 + \Phi/M}{1 - \Phi/M}\right)^{1/2} . \qquad (29)

The resulting action reads:

S_{R-\bar R} = M^2 \int d^4x \sqrt{-\hat g}\left[ R(\hat g) - 6\left(U^{-1}\hat\nabla U\right)^2 \right] . \qquad (30)

The latter makes the symmetries of the sigma model more explicit: the Lagrangian is invariant w.r.t. the rescaling

U \to e^{\lambda}\, U \,, \qquad (31)

where λ is an arbitrary constant. This symmetry is the shift symmetry of the scalar field ln(U), which has only a kinetic term in (30).

Note that ˆg and U are related to ¯g and σ through nonlinear transformations. However, the path integral Z(ˆg) does not equal the path integral Z(g). In other words, the nonlinear conformal transformation from ¯g to ˆg, and the quantization procedure for the scalar and QFT, do not commute with one another. If one were to start with (30) and combine it with the Riegert action for ˆg, one would get an infinitely strongly coupled theory. Instead, the Riegert action that needs to be added to (30) can be obtained from the Riegert action for ¯g, by using the respective nonlinear conformal transformation from ¯g to ˆg. The obtained action will be strongly coupled at the scale ¯M.

References

[1] D. M. Capper and M. J. Duff, Phys. Lett. A 53, 361 (1975); M. J. Duff, Nucl. Phys. B 125, 334-348 (1977); S. M. Christensen and M. J. Duff, Phys. Lett. B 76, 571 (1978)
[2] R. J. Riegert, Phys. Lett. B 134, 56-60 (1984)
[3] E. S. Fradkin and A. A. Tseytlin, Phys. Lett. B 134, 187 (1984)
[4] Z. Komargodski and A. Schwimmer, JHEP 12, 099 (2011) [arXiv:1107.3987 [hep-th]].
[5] G. Gabadadze and G. Tukhashvili, Phys. Rev. D 102, no.2, 024054 (2020) [arXiv:2005.01729 [hep-th]].
[6] J. Bonifacio, K. Hinterbichler and L. A. Johnson, Phys. Rev.
D 102, no.2, 024029 (2020) [arXiv:2004.10716 [hep-th]].
[7] R. J. Crewther, Phys. Rev. Lett. 28, 1421 (1972)
[8] M. S. Chanowitz and J. R. Ellis, Phys. Lett. B 40, 397-400 (1972)
[9] N. D. Birrell and P. C. W. Davies, "Quantum Fields in Curved Space", Cambridge University Press, 1994.
[10] D. Anselmi, Annals Phys. 276, 361-390 (1999) [arXiv:hep-th/9903059 [hep-th]].
[11] G. W. Horndeski, Int. J. Theor. Phys. 10, 363-384 (1974)
[12] B. R. Holstein and J. F. Donoghue, Phys. Rev. Lett. 93, 201602 (2004) [arXiv:hep-th/0405239 [hep-th]].
[13] J. F. Donoghue, Phys. Rev. D 50, 3874-3888 (1994) [arXiv:gr-qc/9405057 [gr-qc]]; [arXiv:gr-qc/9512024 [gr-qc]]; [arXiv:1209.3511 [gr-qc]].
[14] R. Kallosh, A. D. Linde, D. A. Linde and L. Susskind, Phys. Rev. D 52, 912-935 (1995) [arXiv:hep-th/9502069 [hep-th]].
[15] I. Antoniadis and E. Mottola, Phys. Rev. D 45, 2013-2025 (1992); I. Antoniadis, P. O. Mazur and E. Mottola, Phys. Rev. D 55, 4770-4784 (1997) [arXiv:hep-th/9509169 [hep-th]]; Phys. Rev. Lett. 79, 14-17 (1997) [arXiv:astro-ph/9611208 [astro-ph]]; E. Mottola, JHEP 11, 037 (2022) [arXiv:2205.04703 [hep-th]].
[16] A. Nicolis, R. Rattazzi and E. Trincherini, Phys. Rev. D 79, 064036 (2009) [arXiv:0811.2197 [hep-th]].

diff --git a/_9FQT4oBgHgl3EQfLzUR/content/tmp_files/load_file.txt b/_9FQT4oBgHgl3EQfLzUR/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..59cb95d416013234e01cbab5ce5b9f0293e8b7ad
--- /dev/null
+++ b/_9FQT4oBgHgl3EQfLzUR/content/tmp_files/load_file.txt
@@ -0,0 +1,412 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf,len=411
arXiv:2301.
13265v1 [hep-th] 30 Jan 2023

A New Gravitational Action For The Trace Anomaly

Gregory Gabadadze
Center for Cosmology and Particle Physics, Department of Physics, New York University, 726 Broadway, New York, NY, 10003, USA

Abstract

The question of building a local diff-invariant effective gravitational action for the trace anomaly is reconsidered. General Relativity (GR) combined with the existing action for the trace anomaly is an inconsistent low energy effective field theory. This issue is addressed by extending GR into a certain scalar-tensor theory, which preserves the GR trace anomaly equation, up to higher order corrections. The extension introduces a new mass scale – assumed to be below the Planck scale – that governs four high dimensional terms in a local diff-invariant trace anomaly action. Such terms can be kept, while an infinite number of Planck-suppressed invariants are neglected. The resulting theory maintains two derivative equations of motion.
In a certain approximation it reduces to the conformal Galileon, which could have physical consequences.

1 Introduction and summary

General Relativity (GR), combined with a quantum field theory exhibiting the gravitational trace anomaly [1], and described by its effective action [2,3], is an inconsistent field theory, despite the existence of a local diff-invariant trace anomaly action (see [4,5], and discussions below). Such a theory is strongly coupled at an arbitrarily low energy scale, as was shown in [6] in a different context. The inconsistency stems from the fact that the conformal part of a metric is not a propagating degree of freedom in classical GR¹. This issue is fully relevant to GR coupled to the Standard Model (SM) of particle physics. One way to avoid the inconsistency is to augment GR so that the conformal mode turns into a proper propagating degree of freedom, without spoiling observational consequences of GR, as is proposed in this work.
An alternative to the augmentation of GR would be to cancel the trace anomaly by introducing additional low energy degrees of freedom.

As a reminder, one is concerned with the invariance of a quantum field theory (QFT) – such as the SM – with respect to global scale transformations. At the classical level the scale invariance could be exact, as in the Maxwell theory, or it could be violated explicitly, as in a massive scalar theory, or in GR. The quantum theory violates the scale invariance generically, irrespective of whether or not the classical theory is scale invariant. This violation appears in the trace of a stress-tensor, as was first shown for gauge fields [7,8], and subsequently for a gravitational field [1]. The latter will be the focus of this work.

It is useful to build a low energy effective action that would incorporate quantum loops of the matter fields (see [9] and references therein).
The variation of such a quantum effective action would give rise to the equations of motion which capture the trace anomaly [2,3]. These equations could then be solved in various physical settings, notably in astrophysics and cosmology, with the benefit that the solutions would automatically contain quantum effects due to the trace anomaly.

Riegert constructed a local effective action, S_A, which captures the trace anomaly in its equations of motion [2]. This construction was done for a particular class of constrained fields, and hence was regarded by Riegert as breaking diff-invariance. Komargodski and Schwimmer [4] showed that the same functional, S_A, but written in

¹A propagating degree of freedom is defined as a mode with proper quadratic time derivative and spatial gradient terms in Minkowski space-time, which is not removable by gauge transformations or by field redefinitions, nor is it restricted by constraints. The scale factor of the Friedmann-Lemaître-Robertson-Walker (FLRW) metric is a conformal mode which has a ghost-like ("wrong sign") quadratic time derivative term in the GR action.
This "wrong sign" is crucial for FLRW cosmology. However, the scale factor is not a propagating degree of freedom because its spatial dynamics is constrained within GR; had it been a propagating mode, it would have led to insurmountable instabilities because of its ghost-like quadratic time derivative term. The GR action supports only two propagating modes, the helicity ±2 states of a massless graviton, often referred to as the tensor modes.

terms of unconstrained fields, gives a diff-invariant trace anomaly action. This finding placed the Riegert action on a solid footing, which it lacked for many years. Moreover, ref. [5] showed that the Riegert action emerges as a local diff-invariant Wess-Zumino term in a coset for a non-linearly realized conformal symmetry, broken by the scale anomaly.
These findings offer an important perspective: the GR action depends on the metric g and its derivatives. The metric g can formally be decomposed as g = e^{2σ} ¯g, and GR can be viewed as a theory non-linearly realizing a Weyl symmetry that shifts σ and conformally transforms ¯g, keeping g invariant. In GR this is a "spurious" symmetry since the above split of g is arbitrary, and the σ field can be gauged away by the very same Weyl transformations. However, the Riegert action is a local diff-invariant functional of σ and ¯g which is not invariant under the "spurious" Weyl transformations [4]. Thus, the local diff-invariant action containing the Einstein-Hilbert and Riegert terms, both written in terms of σ and ¯g, can be viewed as an action non-linearly realizing the "spurious" Weyl symmetry [5].

The above arguments, however, suggest that something must be wrong: the field σ was "spurious" in GR, but becomes unremovable once GR is supplemented by the Riegert action.
Indeed, in the GR action the metric field can absorb the kinetic term of σ, rendering it in the Riegert action infinitely strongly coupled at arbitrarily low energies [6]². One way to resolve this problem is to augment the classical GR action and only then couple it to a quantum field theory. I will show that the following action

S_{R-\bar R} = M^2 \int d^4x \sqrt{-g}\, R \;-\; \bar M^2 \int d^4x \sqrt{-\bar g}\, \bar R \,, \qquad (1)

with ¯R ≡ R(¯g), M = M_Pl/√2 >> ¯M, and the two metric tensors connected as

g_{\mu\nu} = e^{2\sigma}\, \bar g_{\mu\nu} \,, \qquad (2)

gives a viable low energy theory of gravity with a proper propagating conformal mode, which can be consistently coupled to the Riegert action. The "spurious" Weyl transformation, σ → σ + β(x), ¯g → e^{−2β(x)} ¯g, with an arbitrary β(x), would have made σ gauge-removable in GR; however, this is not a symmetry of the second term in the action (1), and therefore σ can't be gauged away. Furthermore, the action (1) possesses a global symmetry that neither the first nor the second term on the r.h.s.
separately has (Section 2).

²Komargodski and Schwimmer constructed the local diff-invariant Riegert action, and combined it with GR to prove the a-theorem (for an earlier work using a non-local Riegert action for the a-theorem, see [10]). They used the metric field for symmetry bookkeeping, but its dynamics was unimportant for the proof itself; hence the metric was frozen, and only the conformal mode (a dilaton) was utilized [4]. The issue discussed here does not affect the Komargodski-Schwimmer proof of the a-theorem, since their construction does not require a dynamical tensor field [4]. See more comments in Section 4.

Combining (1) with the results of [2–4], the total effective action that captures the GR trace anomaly equation reads as follows:

S_{\rm eff} = S_{R-\bar R} + S_A(\sigma, \bar g) \,, \qquad (3)

where S_A has the form [2–4]

S_A = a \int d^4x \sqrt{-\bar g}\left[ \sigma \bar E - 4 \bar G^{\mu\nu} \bar\nabla_\mu\sigma \bar\nabla_\nu\sigma - 4 (\bar\nabla^2\sigma)(\bar\nabla\sigma)^2 - 2 (\bar\nabla\sigma)^4 \right] + c \int d^4x \sqrt{-\bar g}\, \sigma \bar W^2 \,, \qquad (4)

with the Euler (Gauss-Bonnet) invariant, \bar E = \bar R_{\mu\nu\alpha\beta}^2 - 4 \bar R_{\mu\nu}^2 + \bar R^2, and the Weyl tensor squared, \bar W^2 = \bar R_{\mu\nu\alpha\beta}^2 - 2 \bar R_{\mu\nu}^2 + \bar R^2/3.
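The "spurious" Weyl split discussed above can be checked symbolically. The following sympy sketch treats a single metric component schematically (an assumption made purely for the check): g = e^{2σ} ¯g is unchanged under σ → σ + β, ¯g → e^{−2β} ¯g, while ¯g itself – and hence the ¯M² term in (1) – transforms.

```python
import sympy as sp

# Schematic single-component check of the "spurious" Weyl transformation:
# g = exp(2*sigma)*gbar is invariant under sigma -> sigma + beta,
# gbar -> exp(-2*beta)*gbar, while gbar alone is not.
sigma, beta, gbar = sp.symbols('sigma beta gbar', positive=True)

g = sp.exp(2*sigma)*gbar
g_shifted = sp.exp(2*(sigma + beta))*(sp.exp(-2*beta)*gbar)

assert sp.simplify(g_shifted - g) == 0                  # g is unchanged
assert sp.simplify(sp.exp(-2*beta)*gbar - gbar) != 0    # gbar transforms
```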
The action (4) emerges as a Wess-Zumino term in an SO(2,4)/ISO(1,3) coset, which can be recast as a boundary term in 5D [5]; this distinguishes (4) from other terms in the effective field theory. Note that the terms in S_A belong to the general class of the Horndeski theories giving rise to second order equations of motion [11].

There are an infinite number of additional higher dimensional counter-terms supplementing (3). All these terms will be suppressed by respective powers of the scale M = M_Pl/√2, or of higher scales, such as M(M/¯M)²; they will be neglected. At the same time, S_A retains a finite number of terms which are suppressed by ¯M << M, or by certain geometric mean scales such as (¯M M²)^{1/3}, all of them lower than M. It is due to the separation of scales between ¯M and M that one can regard (3) as a meaningful low energy action obtained by "integrating out" quantum loops of a QFT.
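The ordering of these scales can be illustrated numerically; in the sketch below the values ¯M = 1, M = 100 are placeholders chosen only to satisfy ¯M << M, not numbers taken from the text.

```python
import sympy as sp

# Illustrative check of the scale hierarchy: for Mbar << M, the geometric
# mean (Mbar*M^2)^(1/3) sits strictly between Mbar and M, while the scale
# M*(M/Mbar)^2 of the neglected counter-terms sits above M.
Mbar, M = sp.symbols('Mbar M', positive=True)

geo = (Mbar*M**2)**sp.Rational(1, 3)   # a geometric-mean suppression scale in S_A
high = M*(M/Mbar)**2                   # scale of neglected counter-terms

vals = {Mbar: 1, M: 100}               # placeholder hierarchy Mbar << M
assert geo.subs(vals) > Mbar.subs(vals)
assert geo.subs(vals) < M.subs(vals)
assert high.subs(vals) > M.subs(vals)
```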
The coefficients a and c depend on numbers and representations of the low energy physical degrees of freedom [1]. The very same degrees of freedom could also give rise to classical sources for the gravitational field. Hence the action (3) should be supplemented by the classical action for the fields representing those low energy degrees of freedom, but without quantizing those fields further³.

The signature used in this work is "mostly plus", (−,+,+,+). The Ricci tensor convention is as follows: R_{µν} = ∂_α Γ^α_{µν} − ... ; Riegert, following Ref. [9], uses the "mostly minus" signature, and the opposite sign for the curvature tensor. Ref.
[4] uses the "mostly minus" signature, the curvature convention opposite to Riegert's, and their τ and g are, respectively, −σ and ¯g here. The actions in refs. [2], [4], and in eq. (4) agree with each other after these different conventions are taken into consideration.

³Not all quantum loops are proportional to positive powers of ℏ when massive fields are involved; some classical effects emerge from such loops [12]. Hence, one might worry that without considering the quantum loops some classical effects would be lost. However, keeping the respective classical fields in the effective action would enable one to retain those classical effects via nonlinear classical perturbation theory (or via exact classical or numerical solutions).
2 The R − ¯R theory

Consider the action already quoted in the previous section

S_{R-\bar R} = M^2 \int d^4x \sqrt{-g}\, R \;-\; \bar M^2 \int d^4x \sqrt{-\bar g}\, \bar R \,, \qquad (5)

with the two metric tensors, g and ¯g, related to one another by (2). At the classical level, the above can be viewed as an action for one metric field – say, the metric g – and for the scalar σ, which gets its proper-sign kinetic term from the ¯R term⁴.

Let us consider metric fluctuations above a flat background, h_{µν} = g_{µν} − η_{µν}, and decompose them in a standard fashion according to the representations of the 3D rotation group (these approximate well fluctuations about an arbitrary nonsingular classical background at length scales much shorter than the characteristic curvature radius of the background):

h_{00} = 2n \,, \quad h_{0j} = v_j + \partial_j u \,, \quad h_{ij} = t_{ij} + \partial_i w_j + \partial_j w_i + 2\partial_i \partial_j \rho - 2\delta_{ij} \tau \,, \qquad (6)

where i, j = 1, 2, 3, v_j and w_j are transverse three-vectors, t_{ij} is a transverse-traceless tensor, and τ is the conformal mode. Furthermore, one can choose the gauge u = 0, ρ = 0. Then, the scalar part of the action (5) – which decouples from the tensor and vector parts – equals the space-time integral of the following expression:

2M^2 \left[ -3\dot\tau^2 + (\partial_j \tau)^2 - 2 n\, \partial_j^2 \tau \right] - 2\bar M^2 \left[ -3(\dot\tau + \dot\sigma)^2 + (\partial_j(\tau + \sigma))^2 - 2(n + \sigma)\, \partial_j^2(\tau + \sigma) \right] . \qquad (7)

If the terms proportional to ¯M² were absent, the variation of (7) w.r.t. n would have given a constraint, ∂²_j τ = 0, rendering the conformal mode, τ, non-propagating. This, however, is no longer the case in (7): its variation w.r.t. n gives another constraint, ∂²_j τ = ǫ² ∂²_j σ/(1 − ǫ²), which relates the conformal mode τ to the scalar σ, τ = ǫ² σ/(1 − ǫ²) (here ǫ = ¯M/M << 1, and a zero mode of the Laplacian has been removed by choosing appropriate spatial boundary conditions). Substituting the latter into (7), one gets

-\frac{6 M^2 (1-\epsilon^2)}{\epsilon^2}\, (\partial_\mu \tau)(\partial^\mu \tau) = -\frac{6 \bar M^2}{1-\epsilon^2}\, (\partial_\mu \sigma)(\partial^\mu \sigma) \,, \qquad (8)

which is the kinetic term for the conformal mode with the proper sign. This is the key feature of the R − ¯R theory.

⁴This would correspond to the Einstein frame.
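The algebra leading from (7) to (8) can be cross-checked symbolically. In the sympy sketch below, taudot and sigmadot stand for the time derivatives of τ and σ, and T and S for their spatial Laplacians; these symbol names are assumptions made purely for the check.

```python
import sympy as sp

# Cross-check of the steps from (7) to (8), with derivatives treated as
# formal symbols: taudot, sigmadot = time derivatives; T, S = Laplacians.
M, eps, taudot, sigmadot, T, S = sp.symbols('M epsilon taudot sigmadot T S')
Mbar = eps*M                                   # epsilon = Mbar/M << 1

# Variation of (7) w.r.t. n: -4*M^2*T + 4*Mbar^2*(T + S) = 0,
# which gives the constraint tau = eps^2 * sigma / (1 - eps^2).
T_sol = sp.solve(sp.Eq(-4*M**2*T + 4*Mbar**2*(T + S), 0), T)[0]
assert sp.simplify(T_sol - eps**2*S/(1 - eps**2)) == 0

# Time-derivative part of (7) on that constraint reproduces the
# kinetic coefficient of (8), i.e. 6*Mbar^2/(1 - eps^2).
kinetic = -6*M**2*taudot**2 + 6*Mbar**2*(taudot + sigmadot)**2
kinetic = kinetic.subs(taudot, eps**2*sigmadot/(1 - eps**2))
assert sp.simplify(kinetic - 6*Mbar**2*sigmadot**2/(1 - eps**2)) == 0
```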
Section 4 will instead regard this action as a functional of ¯g and σ, corresponding to the Jordan frame.

It is convenient to rewrite (5) as follows:

S_{R-\bar R} = \int d^4x \sqrt{-g} \left[ M^2 R(g) - \Phi^2 R(g) - 6 (\nabla\Phi)^2 \right] , \qquad (9)

where Φ ≡ ¯M e^{−σ}, and the covariant derivative, ∇, is that of the metric g. Owing to the choice of the sign of the second term on the right hand side of (5), the kinetic term for the scalar Φ has the proper sign in (9); this sign will remain the same after complete diagonalization of the action (9), as shown in the Appendix. The scale ¯M enters the action (9) only through the vacuum expectation value (VEV), ⟨Φ⟩ = ¯M. Since ¯M << M, this VEV is within the realm of the effective field theory.

The last two terms in (9) non-linearly realize a Weyl symmetry which transforms g, and is explicitly broken by the Einstein-Hilbert term. It is straightforward to check that the action (9) is invariant w.r.t. the following transformations of the fields

\Phi \to \Phi + \frac{\omega M \left(1 - (\Phi/M)^2\right)}{1 + \omega \Phi/M} \,, \qquad g_{\mu\nu} \to g_{\mu\nu}\, \frac{(1 + \omega \Phi/M)^2}{1 - \omega^2} \,, \qquad (10)

where ω ≡ tanh(λ), with λ an arbitrary constant⁵. Furthermore, there exists an invariant combination of the metrics g and ¯g,

\hat g_{\mu\nu} = g_{\mu\nu} - \epsilon^2\, \bar g_{\mu\nu} = g_{\mu\nu} \left( 1 - \frac{\Phi^2}{M^2} \right) , \qquad (11)

and if a QFT couples to ˆg in a conventional manner, L_SM = −(1/4)√−ˆg ˆg^{µν} ˆg^{αβ} F_{µα} F_{νβ} + · · ·, then it will also be invariant. Since one should require ǫ ≡ ¯M/M << 1, the coupling to the QFT is approximated by the coupling to the metric g_{µν}. The resulting classical equations approximate GR, as long as ¯M << M⁶.

⁵The symmetry transformations (10) may look mysterious, but their essence is simple: in the Appendix it is shown that certain non-linear conformal transformations of g and Φ bring the action (9) to that of GR coupled to a massless scalar field kinetic term, which is invariant w.r.t. the shift symmetry. The shift transformation, once rewritten in terms of the original variables, gives (10).
⁶One could consider a different approach in which QFT fields would couple to g, instead of ˆg, thus breaking the symmetry (10). This would result in new terms generated in the effective action due to the QFT loops; notably, a mass term for the σ field would be induced. The latter would be proportional to some positive power of the UV scale of the QFT, suppressed by powers of M. Furthermore, σ would couple to the stress-tensor in the linearized approximation, providing a gravity-competing force at distances smaller than the inverse mass of σ. One would then need to impose a constraint on ¯M to suppress the coupling of σ to the stress-tensor.
While this is a logical possibility, it would also lead to the trace anomaly equation being modified as compared with that of GR (such modifications can be kept small by imposing constraints on ¯M). The present work focuses on a scenario where the symmetry (10) is preserved by the QFT coupling, but can be adapted to the case when the matter couples to g.

The requirement of invariance under (10) prohibits certain terms from being added to the action (9). For instance, an additional kinetic term for Φ, the cosmological constant, √−g, the quadratic mass term, √−g Φ², and the quartic term, √−g Φ⁴, do not respect the symmetry (10), and are prohibited from entering the action with arbitrary coefficients. That said, there is a particular combination of the latter three terms which is invariant under (10):

    ± Λ⁴ √−g (1 − Φ²/M²)² ,   (12)

with Λ being some arbitrary dimensionful constant. If the above term is included in the action, then both the quadratic and quartic terms for Φ will be connected to the cosmological constant.
The cosmological constant Λ will be tuned to zero (or be vanishingly small as compared to ¯M), and this will also nullify the quadratic and quartic terms in (12). The term in (12) is nothing other than ±Λ⁴√−ˆg.

One could also consider the terms √−ˆg ˆR and √−ˆg (ˆ∇Φ)²/(1 − Φ²/M²)² to be added to the action (9), with arbitrary coefficients; these terms would individually respect the symmetry (10). However, only one linear combination of the above two terms retains the structure of the action (1) intact, which is necessary to preserve the same trace anomaly equation as one gets in GR (see Section 4). Therefore, in addition to the fine tuning of the cosmological constant, one needs to adopt another fine tuning of the relative coefficients between the above two terms⁷.

The action (1), and the relation (2), are invariant under the following "duality" transformations: M ↔ ¯M, g ↔ −¯g, σ → −σ.
The latter leads to ˆg → ǫ⁻²ˆg, rendering the following three terms invariant: M⁴√−ˆg, M²√−ˆg ˆR, and √−ˆg (ˆ∇Φ)²/(1 − Φ²/M²)²; furthermore, the coupling of ˆg to massless scalars, spinors, and vector fields can be made invariant by inserting appropriate powers of M in front of their kinetic terms. Therefore, the above "duality symmetry" does not help to avoid the need for the fine tuning of the two parameters discussed above.

The equations of motion that follow from (9) read:

    (M² − Φ²) [ Rµν − (1/2) gµν R ] + [ ∇µ∇ν − gµν ∇² ] Φ² = 6 [ ∇µΦ ∇νΦ − (1/2) gµν (∇Φ)² ] ,
    − Φ R + 6 ∇²Φ = 0 ,   (13)

where one can easily include the matter stress-tensor on the r.h.s. of the first equation. It is straightforward to show that a flat FLRW solution of GR is also a solution of (13), amended by the matter stress-tensor.
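The invariance claims above reduce to algebra on the conformal factors, so they can be checked symbolically. The sketch below (sympy is an assumed tool, not part of the paper) verifies that the factor defining ˆg in (11) is invariant under (10), that the combination (12) is invariant (in 4D, √−g rescales by the square of the metric rescaling factor), and that with Φ = M tanh(θ), ω = tanh(λ), the transformation (10) is just the shift θ → θ + λ mentioned in footnote 5:

```python
import sympy as sp

Phi, M, omega, theta, lam = sp.symbols('Phi M omega theta lamda', real=True)

# Transformation (10): Phi -> Phi_p, g_{mu nu} -> g_factor * g_{mu nu}
Phi_p = Phi + omega*M*(1 - (Phi/M)**2)/(1 + omega*Phi/M)
g_factor = (1 + omega*Phi/M)**2/(1 - omega**2)

# (11): hat g_{mu nu} = g_{mu nu}(1 - Phi^2/M^2); its conformal factor must be invariant.
hat_factor = 1 - Phi**2/M**2
hat_check = sp.simplify(g_factor*(1 - Phi_p**2/M**2) - hat_factor)

# (12): sqrt(-g)(1 - Phi^2/M^2)^2; in four dimensions sqrt(-g) rescales by g_factor**2.
term12_check = sp.simplify(g_factor**2*(1 - Phi_p**2/M**2)**2 - hat_factor**2)

# Footnote 5: with Phi = M tanh(theta) and omega = tanh(lambda),
# (10) acts as the shift theta -> theta + lambda (checked numerically here).
shift = Phi_p.subs([(Phi, M*sp.tanh(theta)), (omega, sp.tanh(lam))])
tanh_diff_num = float((shift - M*sp.tanh(theta + lam))
                      .subs({M: 1, theta: sp.Rational(1, 3), lam: sp.Rational(1, 7)}))

print(hat_check, term12_check, tanh_diff_num)
```

Both symbolic checks simplify to zero, and the numerical tanh-addition check vanishes to machine precision.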
⁷One could relax this tuning somewhat to obtain a trace anomaly equation that would slightly differ from that of GR; this is easy to do, but there is no urgency to pursue such extensions.

3 Toward the quantum effective theory

In general, one could quantize (9) as an effective low energy action (see [13], and references to earlier literature). However, there is no need for a general approach here. Instead, one could regard gravity as a dynamical classical field coupled to a QFT [9]. This is justified at energy scales much below the Planck scale. Quantum loops of the QFT will generate an infinite number of counter-terms that one needs to include in the effective action. To deal with the loop divergencies one could use, e.g., dimensional regularization.

Due to the symmetry (10) of the action (9) and of its coupling to a QFT, all the counter-terms should also be invariant under (10)⁸. Hence, one could identify all the symmetry-preserving counter-terms by writing all possible invariants in terms of the metric ˆg, expressed in terms of g and Φ. Furthermore, the metric ˆg approximately equals g, up to corrections of order 1/M²; hence, one could just use g as an approximation. The goal of this section is to understand at what energy scales those counter-terms are significant. To achieve this goal, let us introduce the dimensionful fields

    Hµν = M (gµν − ηµν) ,   Σ = ¯M − Φ ,   (14)

and consider two different limits of the theory (9). First, consider the limit M → ∞, while ¯M is fixed, and H and Σ are finite.
It is then straightforward to see that the Lagrangian in (9) reduces to a free field theory of H and Σ,

    −(1/4) Hµν Gµν(H) − 6 (∂Σ)² ,   (15)

where Gµν(H) denotes the linearized Einstein tensor. The coupling to the stress-tensor, Tµν, is proportional to Hµν T^{µν}/M, up to additional corrections of order O(¯M²/M³), and also vanishes in the limit as long as the stress-tensor is held finite. These considerations show that the loop-generated counter-terms in the full nonlinear theory should vanish in the M → ∞ limit.

Second, consider the limit M → ∞, ¯M → ∞, with ǫ = (¯M/M) ≪ 1 being fixed; the fields H and Σ are held finite in this limit, too. The resulting Lagrangian reads

    −[(1 − ǫ²)/4] Hµν Gµν(H) + 2ǫ Σ R_L(H) − 6 (∂Σ)² ,   (16)

where R_L(H) denotes the linearized Ricci scalar, and the coupling to the matter stress-tensor is proportional to Hµν T^{µν}/M, up to additional corrections of order O(¯M²/M³). The expression (16) represents a free field theory of kinetically mixed tensor and scalar fields, and can easily be diagonalized by a linear conformal transformation of Hµν. Therefore, the counter-terms of the full nonlinear theory ought to vanish in the second limit, too. The latter condition is more restrictive than the one obtained from the first limit above.

⁸I will neglect non-perturbative quantum gravity effects due to, e.g., black holes or wormholes, which are expected to violate global symmetries, and in particular to violate (10). Such violations are exponentially suppressed at low energies in the quasi-classical approximation [14]; they should be contrasted with the violation due to the trace anomaly, which is suppressed only by powers of the scale ¯M, and will be included in the effective action.
It implies that there will not exist counter-terms proportional to ¯M^p/M^q, with p ≥ q. In particular, the following counter-terms,

    ¯M² (¯M/M)ⁿ √−g Σ² ,   (¯M/M)ᵐ √−g Σ⁴ ,   (17)

will be absent for arbitrary integers n ≥ 0 and m ≥ 0 (i.e., such terms will not be generated with n ≥ 0 and m ≥ 0 even if they form combinations that are invariant under (10)).

Let us now summarize the results of the above discussion in terms of the fields of the action (9). One concludes that the counter-terms in (9) will be proportional to

    R³/M² ,   R□R/M² ,   Σ²R²/M² ,   Σ²R□R/M⁴ ,  ...   (18)

Each of these counter-terms will have corrections that make it invariant under (10); however, each of these corrections is of higher order in 1/M². All of these counter-terms are suppressed by M.
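The exclusion of the terms (17) can be illustrated with a quick symbolic limit: in the second scaling limit, ¯M = ǫM with ǫ fixed and M → ∞, neither candidate vanishes, so neither can appear as a counter-term of a theory that becomes free in that limit. A minimal sketch (sympy assumed; representative powers n = 1, m = 0):

```python
import sympy as sp

M, Mbar, eps, Sigma = sp.symbols('M Mbar epsilon Sigma', positive=True)
n, m = 1, 0   # representative powers; the scaling below holds for any n >= 0, m >= 0

term1 = Mbar**2*(Mbar/M)**n*Sigma**2   # candidate counter-term  Mbar^2 (Mbar/M)^n Sigma^2
term2 = (Mbar/M)**m*Sigma**4           # candidate counter-term  (Mbar/M)^m Sigma^4

# Second limit: Mbar = eps*M with eps fixed, M -> oo.  Both candidates fail to vanish:
lim1 = sp.limit(term1.subs(Mbar, eps*M), M, sp.oo)   # diverges
lim2 = sp.limit(term2.subs(Mbar, eps*M), M, sp.oo)   # finite but nonzero

print(lim1, lim2)
```

The first candidate diverges and the second stays finite and nonzero, so both are forbidden by the requirement that the counter-terms vanish in this limit.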
Furthermore, due to the VEV, ⟨Φ⟩ = ¯M, some of the higher dimensional operators will end up depending on ¯M, too:

    ¯M²R²/M² ,   ¯M²R□R/M⁴ ,  ...   (19)

The lowest scale that suppresses the latter operators is M(M/¯M), which is significantly higher than M, thanks to ¯M ≪ M.

All the arguments above apply to conventional local counter-terms encountered in the conventional perturbative series expansion in powers of ∇²/M² [13]. This, however, says nothing about possible non-local terms that may arise from the loops. Due to their nonlocal nature, such terms could remain significant even when ∇² ≪ M². The trace anomaly introduces precisely such terms, which can then be rewritten as local terms, but at the expense of introducing a new field, σ. The new terms in the effective action are suppressed by a scale much smaller than M.
Hence, it is meaningful to keep the trace-anomaly induced terms, but ignore all the other conventional higher dimensional counter-terms suppressed by M and higher scales, as done below.

4 The effective action

The scale anomaly in the trace of a massless QFT stress-tensor reads [1]:

    T^µ_µ = a E(g) + c W²(g) + b □R(g) ,   (20)

where the coefficients a and c depend on the field content of the theory, while b is arbitrary, and for that reason will not be included⁹. The goal is to find an effective action, ˜S_A(g), which incorporates the loop effects of the QFT so that the trace of its metric variation gives (20):

    T^µ_µ = (gαβ / 2√−g) δ˜S_A(g)/δgαβ = a E(g) + c W²(g) .   (21)

Riegert [2] argued that ˜S_A(g) cannot be written as a local diff-invariant functional if one uses only one field, g. Yet, the variation of the total action, S_GR(g) + ˜S_A(g), should give rise to a local equation for the trace anomaly, with the r.h.s. defined by (21).

To find the action, Riegert introduced a decomposition, gµν = e^{2σ} ¯gµν, with the metric ¯g restricted to have a fixed determinant [2]. Using this decomposition, Riegert constructed a local action, S_A(¯g, σ), such that the variation of S_GR(e^{2σ}¯g) + S_A(¯g, σ) w.r.t. σ gives the trace anomaly equation. The functional S_A(¯g, σ) was regarded in [2] as breaking diff-invariance, since the determinant of ¯g had been fixed [2]. It is, however, straightforward to restore diff-invariance in S_A(¯g, σ): one could use exactly the same action, S_A(¯g, σ), but declare that the restriction on the determinant of ¯g has been lifted. Conversely, such a diff-invariant S_A(¯g, σ) would yield back Riegert's original local but non-invariant S_A(¯g, σ), after gauge-fixing the determinant of ¯g.
Logically, Komargodski and Schwimmer [4] reconstructed Riegert's action S_A(¯g, σ) as a functional of two fields, σ and ¯g, without assuming any constraint on ¯g, only requiring that the action reproduce the trace anomaly upon simultaneous Weyl transformations of σ and ¯g. Furthermore, the same S_A(¯g, σ) was obtained as a Wess-Zumino term in a coset construction for non-linearly realized conformal symmetry [5].

The above facts suggest that the local and invariant Riegert action, S_A(¯g, σ), given in (4), should perhaps play a more fundamental role than it did in Riegert's construction, where it was merely used as an intermediate step toward motivating a non-local action¹⁰. To this end, one could add the action (4) to the GR action expressed in terms of σ and ¯g, S_GR(e^{2σ}¯g), and treat both σ and ¯g as dynamical fields without assuming any constraint on ¯g. This theory, however, describes an infinitely strongly coupled system when restricted to its Minkowski background [6].

⁹Both b and the gauge field trace anomaly [7,8] can easily be included in the effective action [2].
It is a strongly coupled theory with an unacceptably low strong coupling scale on any small curvature background. This is so because the theory, S_GR(e^{2σ}¯g) + S_A(¯g, σ), does not contain a kinetic term for σ. An apparent kinetic term for σ in S_GR(e^{2σ}¯g) can be removed by a field redefinition of the metric, ¯g = e^{−2σ}g, rendering the σ field endowed with nonlinear interactions in S_A(g, σ), but without a quadratic kinetic term. Such a theory is intractable as an effective field theory¹¹.

To bring some normalcy, one needs to introduce a kinetic term for σ. However, adding in any new term explicitly depending on σ – besides the ones already present in (4) – would spoil the recovery of the correct trace anomaly. One way out is to use the R − ¯R theory as a functional of ¯g and σ, where the new term, ¯R, does not depend on σ.
This leads one to explore a new action consisting of the R − ¯R theory plus (4):

    S_eff(¯g, σ) = M² ∫ d⁴x √−¯g e^{2σ} [ R(¯g) + 6 (¯∇σ)² ] − ¯M² ∫ d⁴x √−¯g R(¯g)
        + a ∫ d⁴x √−¯g [ σ ¯E − 4 ¯Gµν ¯∇µσ ¯∇νσ − 4 (¯∇²σ)(¯∇σ)² − 2 (¯∇σ)⁴ ]
        + c ∫ d⁴x √−¯g σ ¯W² .   (22)

This is a local diff-invariant functional of σ and ¯g. It can be viewed as an effective action for quantized σ and QFT¹². The classical matter fields – not shown in (22) – couple to the metric ˆgµν = gµν(1 − ¯M²e^{−2σ}/M²) ≃ gµν = e^{2σ}¯gµν. Variation of the action (22) with respect to σ, with a subsequent substitution of ¯g = e^{−2σ}g, gives an equation that does not depend on σ:

    δS_eff(σ, ¯g)/δσ |_{¯gµν = e^{−2σ}gµν} = 0  ⇒  2M²R = a E(g) + c W²(g) ,   (23)

which is exactly the trace-anomaly equation. Now that the tensor ¯g is an unconstrained dynamical field, one should vary (22) w.r.t. ¯g, too. This variation will give ten modified Einstein's equations. One can show that the trace of these equations does not coincide with the equation (23).

¹⁰Riegert's subsequent procedure of constructing a non-local but diff-invariant functional introduces a four-derivative term in the action [2]. This route unavoidably leads to a negative energy state, a ghost (or to violation of unitarity). It is also an unnecessary route – as argued, a local and diff-invariant effective action for the trace anomaly does exist. More general actions with four-derivative terms were explored in [15], with some potentially interesting applications. Regretfully, these actions also have ghosts.

¹¹This is not an issue for the work [4] insofar as ¯g is not a dynamical field and is frozen to equal the flat space metric; in the latter case the GR action gives a kinetic term for σ, albeit with a wrong overall sign. This sign can be flipped to the correct one by choosing a "wrong sign" GR term to start with; this does not cause a problem in [4] since the theory does not use the dynamics of tensor perturbations.

¹²Note that the Riegert action in the second line of (22) contains the Galileon terms that are not suppressed by M. These can be understood as terms obtained by "integrating in" the σ field to make the otherwise nonlocal anomaly action local. In such a case, not all the terms containing σ should be suppressed by M. Conversely, integrating out σ would lead to nonlocal terms, all of them suppressed by M.
Thus, there are eleven independent equations: one equation of motion for σ, and ten equations for the ten components of the symmetric tensor ¯g. By determining σ and ¯g from these equations, one can determine ˆg ≃ g, which is the metric experienced by the classical matter fields. The modified Einstein equations approximate well the conventional equations as long as ¯M ≪ M, and as long as the stress-tensor is much smaller than ¯M⁴. Alternatively, one could rewrite the ten equations for ¯g as ten equations for g, which would also depend on σ. The σ equation, on the other hand, can be kept entirely in terms of g (23). Thus, there would still be eleven equations for eleven unknowns, g and σ.

The theory given by (22) is strongly coupled at the scale ¯M.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' This can be seen in the flat space expansion of (22) obtained either by taking the limit M → ∞, or by the substitution gµν = ηµν � 1 − Φ2/M2�−1 , ¯gµν = e−2σ � 1 − Φ2/M2�−1 ηµν.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' (24) In either case, the theory reduces to a conformal Galileon of the canonically normalized field, π ≡ σ ¯ M [16], with the nonlinear Galileon terms suppressed by the scale ¯ M Seff|¯gµν≃e−2π/ ¯ Mηµν ≃ � d4x � −6e−2π/ ¯ M(∂π)2 + 4a � −∂2π(∂π)2 ¯ M3 + (∂π)2(∂π)2 2 ¯ M4 � + .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' � , (25) where the dots stand for sub-leading terms suppressed by powers of M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' Below the scale ¯ M the full theory (22) is weakly coupled and describes three degrees of freedom – two helicity states of a massless graviton, and one massless scalar, π ≡ σ ¯ M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' The later becomes strongly coupled at the scale of the order of ¯ M, as seen from the Galileon terms in (25).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' The Lagrangian (25) makes it clear that the theory without the ¯ M2√−¯g ¯R term is untenable – taking ¯ M → 0 leaves the nonlinear terms in (25) infinitely strongly coupled.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' The additional scalar, σ, does not couple to the stress tensor in the linearized approxima- tion in Minkowski space, therefore there is no fifth force constraint.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' It can couple to matter on curved backgrounds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' Physical consequences of the action (22) will be studies elsewhere.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' 12 Acknowledgments: I’d like to thank Massimo Porrati, David Spergel, and Giorgi Tukhashvili for useful discussions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' The work was supported in part by NSF grant PHY- 2210349.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' Appendix A complete diagonalization of (9) can be done by using the following conformal transforma- tion gαβ = ˆgαβ � 1 − Φ2 M2 �−1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' (26) It brings the action for the metric ˆg to the canonical form SR− ¯R = � d4x � −ˆg � M2 R(ˆg) − 6 ( ˆ∇Φ)2 (1 − Φ2/M2)2 � .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' (27) The above conformal transformation is non-singular, and (27) is sensible, as long as Φ = ¯ Me−σ << M, ⇒ σ >> − ln(M/ ¯ M) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' (28) In addition, the effective theory is valid if |σ| << 1, which is a stronger constraint.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' The scalar field action in (27) can be simplified further: it can be reduced to a free field interacting with gravity, thanks to the following non-linear field redefinition U = �1 + Φ/M 1 − Φ/M �1/2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' (29) The resulting action reads: SR− ¯R = M2 � d4x � −ˆg � R(ˆg) − 6 (U−1 ˆ∇U)2� .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' (30) The latter makes the symmetries of the sigma model more explicit: the Lagrangian is invari- ant w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' the rescaling U → eλ U , (31) where λ is an arbitrary constant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_9FQT4oBgHgl3EQfLzUR/content/2301.13265v1.pdf'} +page_content=' This symmetry is the shift symmetry of the scalar field ln(U) that has only a kinetic term in (30).' 
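As a consistency check (not part of the original text), one can verify directly that the redefinition (29) turns the scalar kinetic term of (27) into the sigma-model form of (30):

```latex
% Consistency check: the redefinition (29) linearizes the kinetic term of (27).
\ln U \;=\; \tfrac{1}{2}\left[\ln\!\left(1+\frac{\Phi}{M}\right)
      - \ln\!\left(1-\frac{\Phi}{M}\right)\right]
\quad\Rightarrow\quad
U^{-1}\hat\nabla U \;=\; \hat\nabla \ln U
\;=\; \frac{\hat\nabla\Phi/M}{1-\Phi^{2}/M^{2}}\,,

% so that, squaring and restoring the overall factor M^2 of (30),
M^{2}\bigl(U^{-1}\hat\nabla U\bigr)^{2}
\;=\; \frac{(\hat\nabla\Phi)^{2}}{\bigl(1-\Phi^{2}/M^{2}\bigr)^{2}}\,,

% which is precisely the scalar kinetic term appearing in (27).
```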
Note that \(\hat g\) and \(U\) are related to \(\bar g\) and σ through nonlinear transformations. However, the path integral \(Z(\hat g)\) is not equal to the path integral \(Z(g)\). In other words, the nonlinear conformal transformation from \(\bar g\) to \(\hat g\), and the quantization procedure for the scalar and QFT, do not commute with one another. If one were to start with (30) and combine it with the Riegert action for \(\hat g\), one would get an infinitely strongly coupled theory. Instead, the Riegert action that needs to be added to (30) can be obtained from the Riegert action for \(\bar g\), by using the respective nonlinear conformal transformation from \(\bar g\) to \(\hat g\). The obtained action will be strongly coupled at the scale \(\bar M\).

References

[1] D. M. Capper and M. J. Duff, Phys. Lett. A 53, 361 (1975); M. J. Duff, Nucl. Phys. B 125, 334-348 (1977); S. M. Christensen and M. J. Duff, Phys. Lett. B 76, 571 (1978).
[2] R. J. Riegert, Phys. Lett. B 134, 56-60 (1984).
[3] E. S. Fradkin and A. A. Tseytlin, Phys. Lett. B 134, 187 (1984).
[4] Z. Komargodski and A. Schwimmer, JHEP 12, 099 (2011) [arXiv:1107.3987 [hep-th]].
[5] G. Gabadadze and G. Tukhashvili, Phys. Rev. D 102, no. 2, 024054 (2020) [arXiv:2005.01729 [hep-th]].
[6] J. Bonifacio, K. Hinterbichler and L. A. Johnson, Phys. Rev. D 102, no. 2, 024029 (2020) [arXiv:2004.10716 [hep-th]].
[7] R. J. Crewther, Phys. Rev. Lett. 28, 1421 (1972).
[8] M. S. Chanowitz and J. R. Ellis, Phys. Lett. B 40, 397-400 (1972).
[9] N. D. Birrell and P. C. W. Davies, "Quantum Fields in Curved Space", Cambridge University Press, 1994.
[10] D. Anselmi, Annals Phys. 276, 361-390 (1999) [arXiv:hep-th/9903059 [hep-th]].
[11] G. W. Horndeski, Int. J. Theor. Phys. 10, 363-384 (1974).
[12] B. R. Holstein and J. F. Donoghue, Phys. Rev. Lett. 93, 201602 (2004) [arXiv:hep-th/0405239 [hep-th]].
[13] J. F. Donoghue, Phys. Rev. D 50, 3874-3888 (1994) [arXiv:gr-qc/9405057 [gr-qc]]; [arXiv:gr-qc/9512024 [gr-qc]]; [arXiv:1209.3511 [gr-qc]].
[14] R. Kallosh, A. D. Linde, D. A. Linde and L. Susskind, Phys. Rev. D 52, 912-935 (1995) [arXiv:hep-th/9502069 [hep-th]].
[15] I. Antoniadis and E. Mottola, Phys. Rev. D 45, 2013-2025 (1992); I. Antoniadis, P. O. Mazur and E. Mottola, Phys. Rev. D 55, 4770-4784 (1997) [arXiv:hep-th/9509169 [hep-th]]; Phys. Rev. Lett. 79, 14-17 (1997) [arXiv:astro-ph/9611208 [astro-ph]]; E. Mottola, JHEP 11, 037 (2022) [arXiv:2205.04703 [hep-th]].
[16] A. Nicolis, R. Rattazzi and E. Trincherini, Phys. Rev. D 79, 064036 (2009) [arXiv:0811.2197 [hep-th]].
diff --git a/f9E0T4oBgHgl3EQf6AK-/content/tmp_files/2301.02759v1.pdf.txt b/f9E0T4oBgHgl3EQf6AK-/content/tmp_files/2301.02759v1.pdf.txt

Two-dimensional topological insulator state in cadmium arsenide thin films

Alexander C. Lygo, Binghao Guo, Arman Rashidi, Victor Huang, Pablo Cuadros-Romero and Susanne Stemmer a)
Materials Department, University of California, Santa Barbara, California 93106-5050, USA
a) Corresponding author. Email: stemmer@mrl.ucsb.edu

Abstract

Two-dimensional topological insulators (2D TIs) are a highly desired quantum phase, but few materials have demonstrated clear signatures of a 2D TI state. It has been predicted that 2D TIs can be created from thin films of three-dimensional TIs by reducing the film thickness until the surface states hybridize. Here, we employ this technique to report the first observation of a 2D TI state in epitaxial thin films of cadmium arsenide, a prototype Dirac semimetal in bulk form and a 3D TI in thin films. Using magnetotransport measurements with electrostatic gating, we observe a Landau level spectrum and quantum Hall effect that are in excellent agreement with those of an ideal 2D TI.
Specifically, we observe a crossing of the zeroth Landau levels at a critical magnetic +field. We show that the film thickness can be used to tune the critical magnetic field. Moreover, +a larger change in film thickness causes a transition from a 2D TI to a 2D trivial insulator, just as +predicted by theory. The high degree of tunability available in epitaxial cadmium arsenide +heterostructures can thus be used to fine-tune the 2D TI, which is essential for future topological +devices. + + + + +3 +Two-dimensional topological insulators (2D TIs) are a highly sought-after phase owing to +their spin-polarized counterpropagating (helical) edge states, which are of great interest for their +ability to host novel physical phenomena, such as the quantum spin Hall effect [1-3], and for +topologically protected quantum computing [4]. 2D TIs are defined as possessing an inverted +(relative to a normal semiconductor band structure) and gapped band structure which together, by +the bulk-boundary correspondence, give rise to ℤ 2 topological order and novel edge states [1]. +Despite intensive research and a number of theoretically proposed systems [5], experimentally +confirmed 2D TIs are, however, very rare. Materials for which experimental signatures indicative +of a 2D TI state have been reported include quantum well structures [6-8] and monolayer van der +Waals compounds [9, 10]. +An alternative, and largely unexplored, route to a 2D TI is thin films of three-dimensional +(3D) TIs whose film thickness is reduced such that there is a spatial overlap of the surface state +wavefunctions [11-13]. In this case, the 3D TI’s surface states gap out, forming degenerate +massive Dirac states, and the material is a 2D TI if there is inversion between the confinement +induced electron-like and hole-like subbands [13, 14]. 
An additional requirement is the absence of a strong potential difference between the top and bottom surfaces of the film (so-called structural inversion asymmetry, or SIA), as this introduces additional coupling between the massive Dirac states that, if strong enough, destroys the 2D TI phase [13, 15, 16]. Furthermore, as a function of film thickness, it is predicted that changes in the subband ordering can cause transitions between 2D TI and 2D trivial insulator states. The thicknesses at which these transitions are expected to occur depend sensitively on band parameters and microscopic details of the system [12]. Hence, it is of interest to explore high-quality thin films of 3D TIs prepared with controlled thicknesses and negligible SIA.

In this study, we employ the confinement approach with epitaxial thin films of cadmium arsenide (Cd3As2). While Cd3As2 is a prototype 3D Dirac semimetal with a large band inversion in bulk [14, 17-20], it is a nearly ideal 3D TI in thin films [21, 22], with high surface state mobility and no parasitic bulk conduction at low temperature. As the Cd3As2 film thickness is further reduced, theory predicts a transition to a 2D TI (quantum spin Hall insulator) with a wide energy gap [14]. Recently, we reported evidence of surface state hybridization in a 20-nm-thin Cd3As2 film [23], an essential step towards a 2D TI. Here, we present the first experimental evidence of a 2D TI state in (001)-oriented Cd3As2 films. To this end, we use Landau level spectroscopy, which, as we discuss next, can unambiguously identify surface state hybridization and the inversion of the bands from the behavior of the zeroth Landau levels in high-quality films, as well as other essential details, such as SIA. Moreover, just as predicted by the theoretical models, we show that a small (6 nm) change in film thickness causes a transition from a 2D TI to a 2D trivial insulator.
A hallmark of a 2D TI in a perpendicular magnetic field (B) is a crossing of two zero-energy (n = 0) Landau levels at a critical field (Bc), as shown in Fig. 1(a), and a zero-conductance (ν = 0) quantum Hall plateau when the chemical potential lies in the energy gap between the n = 0 Landau levels (after Refs. [15, 16]; for details of the calculations, see the Supplementary Materials [24]). Because of band inversion, one of the two n = 0 Landau levels is electron-like and originates from the valence band, while the other is hole-like and originates from the conduction band. At Bc, the system undergoes a phase transition from a nontrivial insulator to a trivial one. Concurrently, the ν = 0 quantum Hall plateau disappears but then reemerges for B > Bc. The zero-energy Landau level crossing and re-entrant quantum Hall effect at Bc are robust signatures of a 2D TI state [6], while its other key features, the helical edge states and the quantum spin Hall effect, are easily obscured by trivial edge conduction paths [27, 28] and by their extreme sensitivity to disorder [29-31]. The zeroth Landau level crossing at Bc distinguishes the 2D TI from all other potential states of a topological thin film. For example, Fig. 1 shows calculated [15, 16] Landau fan diagrams of several other possible states [24], such as the Landau level spectrum of a hybridized 3D TI without band inversion (a film in the trivial thickness regime, for example) [Fig. 1(b)]. Here, because band inversion is absent, all electron-like Landau levels originate from the conduction band and all hole-like ones from the valence band. Thus, the n = 0 Landau levels never cross, and the ν = 0 quantum Hall plateau persists and widens with increasing B. If the film is in the nontrivial thickness regime but strong SIA is present, the crossing of the n = 0 Landau levels of the 2D TI becomes an anti-crossing [Fig. 1(c)], and the system is also a trivial insulator.
Additionally, the ν = 0 quantum Hall plateau is present at all B but, in contrast to the preceding case, its width evolves nonmonotonically with increasing B. Finally, Fig. 1(d) shows that a ν = 0 quantum Hall state can also be observed for a non-hybridized (thick) 3D TI film when there is SIA and when the chemical potential lies between the n = 0 Landau levels of the top and bottom surfaces [32]. In this case, the n = 0 Landau levels are non-dispersing, as has indeed been observed for several 3D TIs [33-36], and the width of the ν = 0 quantum Hall plateau is constant in B.

The presence of SIA can also be detected by characteristic crossings of Landau levels at higher energies, as seen in Figs. 1(c) and 1(d). This latter feature results in complicated filling factor (ν) sequences in the quantum Hall effect as a function of carrier density and magnetic field, as observed for thicker (~50 nm) Cd3As2 films that are in the 3D TI state [21]. No such crossings occur without SIA [see Figs. 1(a) and 1(b)].

Combined, these distinguishing features, especially the dispersion of the n = 0 Landau levels and a re-entrant ν = 0 quantum Hall plateau, provide experimental signatures of the four different possible phases in thin films. Clearly, it is essential to tune the Fermi level to charge neutrality and into the gap between the n = 0 Landau levels to distinguish these phases.

To study the electronic state of very thin Cd3As2 films, we performed low-temperature magnetotransport measurements on top-gated Hall bar structures. Details about sample growth, device fabrication, and transport measurements can be found in the Supplementary Information [24].
Figures 2(a) and 2(b) show the longitudinal (σxx) and Hall (σxy) conductivities, respectively, of a 20 nm (001)-oriented Cd3As2 film (sample 1), calculated by tensor inversion from the resistivities (shown in the Supplementary Information [24]), as a function of applied top gate bias, Vg, and B (for a plot of Vg versus carrier density, see the Supplementary Information [24]). The labels in Fig. 2(a), marking the σxx minima, denote the filling factors of the corresponding integer quantum Hall plateaus in Fig. 2(b). We observe a sequence of Landau levels that produce well-defined quantum Hall plateaus with both even and odd ν. Taken alone, these filling sequences would be consistent with any of the states in Fig. 1. As discussed above, the low-energy portion of the spectrum, around ν = 0, is crucial to distinguish them.

We thus first turn our attention to the region around charge neutrality (-2 V < Vg < -1.5 V). The first notable feature is the existence of a gap at zero B, as evidenced by the very low conductivity (σxx ≈ 0.06 e2/h) and insulating behavior (the temperature dependence is discussed below). The insulating state at B = 0 shows that the top and bottom Dirac surface states are hybridized and, as a result, are gapped out. Moreover, the salient feature here is two Landau levels that originate at different Vg and, as B is increased, converge, meeting at approximately Bc = 9.4 T, before diverging at larger B. Concurrently, the ν = 0 quantum Hall plateau vanishes and then reemerges. This latter feature can be seen especially clearly in the σxy traces shown in Fig. 2(b): here, ν = 0 is present at low B (yellow-green traces), absent at intermediate B, and re-entrant at B > 10 T (dark blue traces) (for additional clarity, see also the Supplementary Material for a zoom-in of the σxy traces around the ν = 0 plateau [24]). Accordingly, we identify the Landau levels crossing at Bc = 9.4 T as the n = 0 levels.
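The tensor inversion used to obtain σxx and σxy from the measured resistivities is the standard 2D matrix inverse; the following minimal sketch is our own illustration (the helper name is ours, and the overall sign of σxy depends on the sign convention chosen for ρxy):

```python
def invert_resistivity_tensor(rho_xx, rho_xy):
    """Convert 2D resistivity tensor components to conductivities.

    sigma = rho^{-1} for rho = [[rho_xx, -rho_xy], [rho_xy, rho_xx]],
    which gives sigma_xx = rho_xx / (rho_xx**2 + rho_xy**2) and
    sigma_xy = rho_xy / (rho_xx**2 + rho_xy**2).
    """
    denom = rho_xx**2 + rho_xy**2
    return rho_xx / denom, rho_xy / denom

# On a quantum Hall plateau (rho_xx -> 0, rho_xy = 1/nu in units of h/e^2),
# the inversion gives sigma_xx -> 0 and sigma_xy -> nu (in units of e^2/h).
sxx, sxy = invert_resistivity_tensor(0.0, 0.5)  # illustrative nu = 2 plateau
print(sxx, sxy)
```

Note that a deep minimum in ρxx alone does not fix σxy; both tensor components are needed, which is why the plateaus are identified from the inverted pair.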
In the regions where these two Landau levels are well separated, σxx is approximately zero and σxy plateaus at zero. By contrast, where σxx is finite (around Bc), σxy changes sign smoothly (Fig. S5 [24]). The Landau level fan diagram is thus in perfect agreement with that of an idealized prototype 2D TI shown in Fig. 1(a). It is not in agreement with any of the other possible states in Fig. 1. We note that it is the significant dispersion (in B) of both n = 0 levels, and the absence of any other states nearby, that allows for the clear re-entrant ν = 0 quantum Hall plateau. By comparison, for HgTe quantum wells the hole-like n = 0 Landau level is nearly non-dispersive (indicative of a low Fermi velocity) and has a more complex dispersion [37, 38], causing the re-entrant quantum Hall effect to be rather complex (e.g., ν = 0 → ν = 1 (-1) → ν = 0 [6]).

Next, we discuss the Landau level spectrum away from the charge neutrality point, which is important for understanding the stability of the 2D TI state in this film. There are two Landau levels, marked with arrows in Fig. 2(a), from a higher-energy, bulk-originated subband. This is evident from the fact that they can be traced back to a different point on the Vg axis (see Fig. S6, where lines have been drawn for clarity to show the intercepts of the Landau levels with the Vg axis [24]). When the chemical potential crosses either of these additional Landau levels, ν changes by 1 [see the corresponding region in Fig. 2(b)], indicating that they are non-degenerate. The existence of electronic subbands is consistent with expectations for a quantum-confined thin film [12, 14]. The second important feature of the Landau level spectrum is that, with the exception of the n = 0 Landau levels, Landau levels that originate from the electronic states near charge neutrality do not cross.
As discussed above, the absence of any other crossings in the Landau fans to which these n = 0 levels belong shows that there is no perceptible energy offset (SIA), although such an offset is present for thicker films in similar structures [21, 22]. While it is perhaps not surprising that a 20 nm thin film with a small gap does not sustain a potential offset, the absence of strong SIA is an essential prerequisite for the observed 2D TI state. Finally, in the region Vg < -2 V we observe a deep minimum in σxx and a plateau in σxy corresponding to ν = -1. At more negative Vg we do not observe additional quantum Hall plateaus, possibly because the chemical potential resides in an electronic band of the buffer layer, causing spillover of carriers, or because of a high density of Landau levels from the heavier, lower-energy valence band states (e.g., similar to HgTe quantum wells).

To demonstrate the sensitivity of the 2D TI state to small changes in film thickness, we performed additional transport measurements on 18 nm, 19 nm, and 22 nm Cd3As2 films prepared under nominally identical conditions as sample 1 (see Fig. 3). The main result is a qualitatively identical Landau level spectrum. The primary quantitative differences as a function of film thickness are: (i) with increasing film thickness, Bc increases, similar to the thickness dependence observed in HgTe quantum wells [6]; and (ii) the bulk-originated subband moves to higher Vg, consistent with the expected behavior for quantum-confined thin films. The similarity of the four Landau level spectra is in agreement with our previous observations, namely a qualitative consistency of the higher-energy Landau level (n > 0) behavior with thickness [39]. However, we now clearly see the opening of a hybridization gap, which is most dramatically displayed in the behavior of the zero-energy Landau levels, which is extremely sensitive to the thickness.
As the film thickness is further decreased, it is predicted [14] that changes in the subband ordering cause a transition from a 2D TI to a 2D trivial insulator. Figure 4 shows σxx and σxy versus Vg and B for a 14 nm film prepared under similar conditions as those discussed above. Qualitatively, away from the charge neutrality point [Figs. 4(a) and 4(b)], the data are strikingly similar to the other samples. Quantum Hall plateaus with both even and odd ν, indicating a degeneracy lifting, are present, but no crossings of higher-energy Landau levels in the low-energy fans are evident, demonstrating again the absence of SIA. Distinct from the other films, the Landau levels that originate from states of a higher-energy subband are absent in the Vg range studied here, consistent with the expectation that the subband ordering changes as a function of film thickness. The important difference in this film is seen near charge neutrality. For all samples, there is a clear insulating state at B = 0 accompanied by σxy = 0, indicating hybridization of the surface states; for the 14 nm film, however, the n = 0 Landau levels diverge and the ν = 0 quantum Hall plateau widens with increasing magnetic field. The Landau level spectrum of the 14 nm film is in remarkably good agreement with that of a 2D trivial insulator shown in Fig. 1(b). We conclude that a 2D TI state in Cd3As2 can be achieved in 18-22 nm films, while a small reduction in the film thickness (to 14 nm) causes a transition from a 2D topological insulator to a trivial one, consistent with the change in the subband ordering that is evident in the higher-energy spectrum. Small discrepancies between the thickness ranges for the different phases in the observations vs. the predictions (Ref. [14]) can easily be explained by the fact that the microscopic details of the heterostructures, which determine important parameters such as the Fermi velocity, were not considered in the models.
In summary, we have observed the hallmarks of a nearly ideal 2D TI state in thin films of (001)-oriented Cd3As2, including insulating behavior at zero field and an energy gap between two n = 0 Landau levels that closes at Bc. The zeroth Landau level crossing is remarkably well resolved, aided by the relatively high Fermi velocities of the electronic states that give rise to the zero-energy Landau levels, low disorder, and a good separation in energy from, e.g., the high density of low-mobility valence band states found in other systems [37, 40]. These results establish thin films of Cd3As2 as a new member of the very small, highly sought-after family of 2D TIs discovered to date. Crucially, this 2D TI state is realized via a route previously suggested by theory, namely quantum confinement. We also demonstrated that reducing the film thickness further induces a transition to a 2D trivial insulator, also consistent with theoretical predictions. The wide range of additional heterostructure parameters that can be tuned, such as film strain, makes the 2D TI phase in Cd3As2 films extraordinarily tunable. This tunability could prove extremely useful in designing and testing future superconducting hybrid junctions for quantum information systems, which depend on finely tuned energy scales, and novel correlated states [41]. This study provides clear directions for future work, such as a more detailed study of the thickness dependence of the electronic state of Cd3As2 films, particularly one that includes ultrathin (< 10 nm) films. Finally, the results presented here demonstrate the possibility of realizing the quantum spin Hall effect in thin films of Cd3As2; a next step should be investigations of the edge state physics, which requires smaller devices and low-defect-density mesa boundaries.

Acknowledgements

The authors are grateful to Andrea Young and Xi Dai for very helpful discussions.
The research was supported by the Air Force Office of Scientific Research (Grant No. FA9550-21-1-0180) and by the Office of Naval Research (Grant No. N00014-21-1-2474). A.C.L. and B.G. also thank the Graduate Research Fellowship Program of the U.S. National Science Foundation for support (Grant Nos. 1650114 and 2139319). This work made use of the MRL Shared Experimental Facilities, which are supported by the MRSEC Program of the U.S. National Science Foundation under Award No. DMR 1720256.

Appendix A: Temperature-dependent conductance at ν = 0 in the 2D TI state

To further investigate the 2D TI, including the gapped state at low B, we performed temperature-dependent two-point conductance (G) measurements, shown in the Supplementary Information [24]. Figure S13 [24] shows the temperature dependence of the minimum G of film 1 between the n = 0 Landau levels at B < 8 T [Fig. S13(a)], around Bc [Fig. S13(b)], and for B > 12 T [Fig. S13(c)]. For B < 8 T and B > 12 T, respectively, G shows an exponential-like increase with increasing temperature, consistent with a gapped state. In these two regimes, the observed temperature dependence can be described by G(T) = G0 exp[-(T0/T)^p], where G0 is a temperature-independent prefactor, T0 is the characteristic hopping temperature, and p is a model parameter that depends on the density of states at the Fermi level and takes values 0 < p ≤ 1 (see Supplementary Information [24]). A value of p = 1 corresponds to Arrhenius behavior, with T0 = Ea/kB, where Ea is the activation energy and kB is Boltzmann's constant. We find that for B < 8 T [Fig. S13(a)], the temperature dependence of G can be described by p = 1/3, i.e., 2D Mott variable-range hopping (VRH), over some temperature range (see dashed lines). At intermediate B (7 T < B < 13 T), G shows dramatically different behavior [Fig. S13(b)].
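As a concrete illustration of the hopping analysis, the sketch below evaluates G(T) = G0 exp[-(T0/T)^p] for 2D Mott VRH (p = 1/3) and Arrhenius (p = 1) behavior. The parameter values are illustrative only, not the fitted values from the measurements:

```python
import math

def hopping_conductance(T, G0, T0, p):
    """G(T) = G0 * exp(-(T0/T)**p); p = 1/3 is 2D Mott VRH, p = 1 is Arrhenius."""
    return G0 * math.exp(-((T0 / T) ** p))

# Illustrative parameters (not the fitted values from the paper):
G0, T0 = 1.0, 100.0  # prefactor (e^2/h) and hopping temperature (K)
for T in (2.0, 5.0, 20.0):
    g_vrh = hopping_conductance(T, G0, T0, p=1 / 3)
    g_arr = hopping_conductance(T, G0, T0, p=1.0)
    # Both mechanisms are insulating (G grows with T), but for the same T0 the
    # Arrhenius form is suppressed far more strongly at low temperature.
    print(T, g_vrh, g_arr)
```

This is why the two regimes are distinguishable in Fig. S13: on a plot of ln G versus T^(-1/3) the VRH regime is linear, whereas on ln G versus 1/T the Arrhenius regime is linear.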
Within this range, but away from Bc, G is a nonmonotonic function of temperature, with dG/dT > 0 at low temperatures, transitioning to a T-linear dependence with a negative slope at higher temperatures. At Bc (B = 9 T and B = 10 T traces), G shows metallic behavior (it increases monotonically with decreasing temperature) and is approximately linear in temperature above 12.5 K. For B ≥ 13 T [Fig. S13(c)], the temperature dependence roughly follows Arrhenius behavior above 3.5 K, consistent with a clean, gapped state. Most importantly, the crossover from insulating to metallic temperature dependence occurs around B = Bc, consistent with a crossing of the n = 0 Landau levels. Near Bc, the maximum value of σxx is ~0.85 e2/h (see Supplementary Information Fig. S14 for σxx values between the n = 0 Landau levels [24]). Interestingly, this σxx value is close to twice the universal value of 0.5 e2/h [42], consistent with the crossing of two Landau levels; by contrast, at the other quantum Hall transitions, σxx is close to 0.5 e2/h. (We note that G, presented in Fig. S13, is smaller than σxx due to the contribution of contact/series resistances, including the ungated regions near the contacts.) The origin of the "strange metal" (T-linear) behavior at the crossing of the hole-like and electron-like zeroth Landau levels and, more generally, the nature of this "Dirac-like" state warrant further investigation, including theoretically.

Appendix B: Detectability of the helical edge states in the 2D TI state

We briefly comment here on the detectability of the helical edge states, which are expected to be present for B < Bc. It is generally accepted in the 2D TI literature [9, 43, 44] that much smaller devices than those studied here, with carefully prepared edges [31], are needed to characterize the very fragile helical edge states, because the dimensions of the devices studied here exceed the phase coherence length (~1 µm).
In devices larger than this dimension, it is generally found that σxx is smaller than 2e2/h at all temperatures [9, 43, 44]. Future experiments will address the fabrication challenges for smaller devices. Already in these large devices, however, the data hint at potentially rich physics. For one, the deepest minimum of σxx occurs at B = 2.4 T [σxx(2.4 T) = 0.008 e2/h] and not at B = 0 [σxx(0 T) = 0.06 e2/h], where the energy gap is largest (see Supplementary Information Fig. S14 [24]). This is also reflected in the temperature dependence of G, because T0 is largest (125±8 K) at B = 2 T. One possible interpretation is that the decrease in σxx away from B = 0 reflects increased scattering of the edge states due to time-reversal symmetry breaking [45]. Competition between this enhanced scattering and the closing gap causes the minimum of σxx to occur at finite B. Secondly, VRH does not completely describe the data: at the lowest temperatures, G saturates (see Fig. S13a), possibly indicative of a second transport path.

References

[1] C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005).
[2] B. A. Bernevig and S.-C. Zhang, Phys. Rev. Lett. 96, 106802 (2006).
[3] B. A. Bernevig, T. L. Hughes, and S. C. Zhang, Science 314, 1757 (2006).
[4] L. Fu and C. L. Kane, Phys. Rev. Lett. 100, 096407 (2008).
[5] C. Cao and J.-H. Chen, Adv. Quantum Technol. 2, 1900026 (2019).
[6] M. Konig, S. Wiedmann, C. Brune, A. Roth, H. Buhmann, L. W. Molenkamp, X. L. Qi, and S. C. Zhang, Science 318, 766 (2007).
[7] A. Roth, C. Brune, H. Buhmann, L. W. Molenkamp, J. Maciejko, X. L. Qi, and S. C. Zhang, Science 325, 294 (2009).
[8] L. J. Du, I. Knez, G. Sullivan, and R. R. Du, Phys. Rev. Lett. 114, 096802 (2015).
[9] S. F. Wu, V. Fatemi, Q. D. Gibson, K. Watanabe, T. Taniguchi, R. J. Cava, and P. Jarillo-Herrero, Science 359, 76 (2018).
[10] S. J. Tang, C. F. Zhang, D. Wong, Z. Pedramrazi, H. Z. Tsai, C. J.
Jia, B. Moritz, M. Claassen, H. Ryu, S. Kahn, J. Jiang, H. Yan, M. Hashimoto, D. H. Lu, R. G. Moore, C. C. Hwang, C. Hwang, Z. Hussain, Y. L. Chen, M. M. Ugeda, Z. Liu, X. M. Xie, T. P. Devereaux, M. F. Crommie, S. K. Mo, and Z. X. Shen, Nat. Phys. 13, 683 (2017).
[11] J. Linder, T. Yokoyama, and A. Sudbo, Phys. Rev. B 80, 205401 (2009).
[12] H.-Z. Lu, W.-Y. Shan, W. Yao, Q. Niu, and S.-Q. Shen, Phys. Rev. B 81, 115407 (2010).
[13] C. X. Liu, H. Zhang, B. H. Yan, X. L. Qi, T. Frauenheim, X. Dai, Z. Fang, and S. C. Zhang, Phys. Rev. B 81, 041307 (2010).
[14] Z. J. Wang, H. M. Weng, Q. S. Wu, X. Dai, and Z. Fang, Phys. Rev. B 88, 125427 (2013).
[15] W. Y. Shan, H. Z. Lu, and S. Q. Shen, New J. Phys. 12, 043048 (2010).
[16] S. B. Zhang, H. Z. Lu, and S. Q. Shen, Sci. Rep. 5, 13277 (2015).
[17] S. Borisenko, Q. Gibson, D. Evtushinsky, V. Zabolotnyy, B. Buchner, and R. J. Cava, Phys. Rev. Lett. 113, 165109 (2014).
[18] M. Neupane, S. Y. Xu, R. Sankar, N. Alidoust, G. Bian, C. Liu, I. Belopolski, T. R. Chang, H. T. Jeng, H. Lin, A. Bansil, F. Chou, and M. Z. Hasan, Nat. Commun. 5, 3786 (2014).
[19] Z. K. Liu, J. Jiang, B. Zhou, Z. J. Wang, Y. Zhang, H. M. Weng, D. Prabhakaran, S.-K. Mo, H. Peng, P. Dudin, T. Kim, M. Hoesch, Z. Fang, X. Dai, Z. X. Shen, D. L. Feng, Z. Hussain, and Y. L. Chen, Nat. Mater. 13, 677 (2014).
[20] M. N. Ali, Q. Gibson, S. Jeon, B. B. Zhou, A. Yazdani, and R. J. Cava, Inorg. Chem. 53, 4062 (2014).
[21] D. A. Kealhofer, L. Galletti, T. Schumann, A. Suslov, and S. Stemmer, Phys. Rev. X 10, 011050 (2020).
[22] D. A. Kealhofer, R. Kealhofer, D. Ohara, T. N. Pardue, and S. Stemmer, Sci. Adv. 8, eabn4479 (2022).
[23] B. Guo, A. C. Lygo, X. Dai, and S. Stemmer, APL Mater. 10, 091116 (2022).
[24] See Supplemental Material [link to be inserted by publisher] for the models used to calculate Fig.
1; details of the film growth, device fabrication, and transport measurements; x-ray characterization of the samples; the measured resistance data from which the conductivity data shown in the main text were calculated; the gate voltage dependence of the sheet carrier density and Hall mobility extracted from the low-field Hall effect; the Landau level data shown in the main text with guides to the eye; the temperature dependence of G at ν = 0; and the magnetic field dependence of σxx between the n = 0 Landau levels. The Supplementary Information also contains Refs. [25, 26].
[25] O. F. Shoron, M. Goyal, B. H. Guo, D. A. Kealhofer, T. Schumann, and S. Stemmer, Adv. Electron. Mater. 6, 2000676 (2020).
[26] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, I. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. N. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy Contributors, Nat. Methods 17, 261 (2020).
[27] E. Y. Ma, M. R. Calvo, J. Wang, B. Lian, M. Muhlbauer, C. Brune, Y. T. Cui, K. J. Lai, W. Kundhikanjana, Y. L. Yang, M. Baenninger, M. Konig, C. Ames, H. Buhmann, P. Leubner, L. W. Molenkamp, S. C. Zhang, D. Goldhaber-Gordon, M. A. Kelly, and Z. X. Shen, Nat. Commun. 6, 7252 (2015).
[28] F. Nichele, H. J. Suominen, M. Kjaergaard, C. M. Marcus, E. Sajadi, J. A. Folk, F. M. Qu, A. J. A. Beukman, F. K. de Vries, J. van Veen, S. Nadj-Perge, L. P. Kouwenhoven, B. M. Nguyen, A. A. Kiselev, W. Yi, M. Sokolich, M. J. Manfra, E. M. Spanton, and K. A. Moler, New J. Phys. 18, 083005 (2016).
[29] J. Maciejko, X. L. Qi, and S. C. Zhang, Phys. Rev. B 82, 155310 (2010).
[30] G. Tkachov and E. M. Hankiewicz, Phys. Rev. Lett. 104, 166803 (2010).
[31] K. Bendias, S. Shamim, O. Herrmann, A. Budewitz, P. Shekhar, P. Leubner, J. Kleinlein, E. Bocquillon, H. Buhmann, and L. W. Molenkamp, Nano Lett. 18, 4831 (2018).
[32] T. Morimoto, A. Furusaki, and N. Nagaosa, Phys. Rev. Lett. 114, 146803 (2015).
[33] R. Yoshimi, A. Tsukazaki, Y. Kozuka, J. Falson, K. S. Takahashi, J. G. Checkelsky, N. Nagaosa, M. Kawasaki, and Y. Tokura, Nat. Commun. 6, 6627 (2015).
[34] Y. Xu, I. Miotkowski, and Y. P. Chen, Nat. Commun. 7, 11434 (2016).
[35] S. K. Chong, K. B. Han, T. D. Sparks, and V. V. Deshpande, Phys. Rev. Lett. 123, 036804 (2019).
[36] J. Ziegler, D. A. Kozlov, N. N. Mikhailov, S. Dvoretsky, and D. Weiss, Phys. Rev. Res. 2, 033003 (2020).
[37] S. Shamim, P. Shekhar, W. Beugeling, J. Böttcher, A. Budewitz, J.-B. Mayer, L. Lunczer, E. M. Hankiewicz, H. Buhmann, and L. W. Molenkamp, Nat. Commun. 13, 2682 (2022).
[38] W. Beugeling, Phys. Rev. B 104, 115428 (2021).
[39] D. A. Kealhofer, M. Goyal, T. N. Pardue, and S. Stemmer, Phys. Rev. B 104, 035435 (2021).
[40] A. M. Kadykov, S. S. Krishtopenko, B. Jouault, W. Desrat, W. Knap, S. Ruffenach, C. Consejo, J. Torres, S. V. Morozov, N. N. Mikhailov, S. A. Dvoretskii, and F. Teppe, Phys. Rev. Lett. 120, 086401 (2018).
[41] Y. Zeng, F. Xue, and A. H. MacDonald, Phys. Rev. B 105, 125102 (2022).
[42] S. S. Murzin and A. G. M. Jansen, Physica E 43, 1576 (2011).
[43] L. Lunczer, P. Leubner, M. Endres, V. L. Müller, C. Brüne, H. Buhmann, and L. W. Molenkamp, Phys. Rev. Lett. 123, 047701 (2019).
[44] G. M. Gusev, Z. D. Kvon, E. B. Olshanetsky, and N. N. Mikhailov, Solid State Commun. 302, 113701 (2019).
[45] P. Delplace, J. Li, and M. Buttiker, Phys. Rev. Lett. 109, 246803 (2012).

Figure Captions

Figure 1: Characteristic Landau level spectra of different topological phases in thin films. (a) 2D TI, (b) 2D trivial insulator, (c) hybridized 3D TI with SIA, and (d) 3D TI with SIA. The labels denote quantum Hall filling factors.
In (a), (b), and (c), electron-like (hole-like) levels are shown in orange (blue); in (d), Landau levels from the higher-energy surface are shown in purple and those from the other surface in green. In all panels, the n = 0 Landau levels are drawn with a heavier line weight. All four phases can produce a ν = 0 quantum Hall state (a 3D TI without SIA has degenerate n = 0 Landau levels and no ν = 0 quantum Hall state). See the Supplementary Materials for details of the calculations.

Figure 2: Landau levels and quantum Hall effect of sample 1 (20 nm film). Magnetic field (B) and gate voltage (Vg) dependence of the (a) longitudinal (σxx) and (b) Hall (σxy) conductivities of sample 1. The labels in (a) denote the corresponding quantum Hall filling factors in (b). The black arrows mark additional Landau levels from a higher-energy subband.

Figure 3: Magnetic field (B) and gate voltage (Vg) dependence of the longitudinal conductivity (σxx) of (a) 18 nm, (b) 19 nm, and (c) 22 nm films.

Figure 4: Landau levels and quantum Hall effect of a 14 nm film. Shown are the B and Vg dependence of (a) σxx and (b) σxy of the sample. Labels in (a) denote quantum Hall filling factors obtained from (b).
Supplementary Information

Two-dimensional topological insulator state in cadmium arsenide thin films

Alexander C.
Lygo, Binghao Guo, Arman Rashidi, Victor Huang, Pablo Cuadros-Romero, and Susanne Stemmer

Materials Department, University of California, Santa Barbara, California 93106-5050, USA

Contents
Model of a hybridized 3D TI
Sample growth
Device fabrication
Transport measurements
X-ray characterization
Resistance data
Gate voltage dependence of sheet carrier density and Hall mobility
Field dependence of ν = 0
σxx of the 20 nm sample and the 14 nm sample with guides to the eye
Temperature dependence of G at ν = 0
Magnetic field dependence of σxx between the n = 0 Landau levels
References
Model of a hybridized 3D TI
A two-dimensional topological insulator state can be realized in thin films of 3D topological insulators when the film thickness, L, is reduced such that there is a spatial overlap of the surface state wavefunctions. Starting from the bulk Hamiltonian of a 3D TI, the effective Hamiltonian in the thin film limit was derived in Ref. [1], the result being:

H_eff = ( h_+(k)    V_0 I )
        ( V_0 I     h_−(k) ) ,    (S1)

where I is the 2×2 identity. Here, h_±(k) = E_0 − Dk² − ħv_F(k_x σ_y + k_y σ_x) ± (Δ/2 − Mk²)σ_z, where the σ_i are Pauli matrices in the basis of spin-up and spin-down, k² = k_x² + k_y², v_F is the Fermi velocity, and E_0, D, Δ, and M are material-specific band parameters. In the presence of structural inversion asymmetry (SIA), 2V_0 is the potential difference between the top and bottom surfaces of the thin film. If V_0 = 0 (no SIA), H_eff is simply the Bernevig-Hughes-Zhang Hamiltonian [2]. The condition ΔM > 0 describes a 2D TI. The term (Δ/2 − Mk²) is a mass term due to hybridization; the parameters Δ and M depend on L and approach zero in the large-L limit, returning the system to a 3D TI.

Landau level energies of a hybridized 3D TI were derived in Ref. [3]. Here we briefly summarize their results. With the magnetic field applied normal to the plane of the thin film, the Landau level energies are obtained from H_eff by first making the substitution k → −i∇ + eA/ħ, where e is the elementary charge and ħ is the reduced Planck constant, using the Landau gauge A = Byx̂. Introducing the ladder operators a = (l_B/√2)(k_x − ∂_y − y/l_B²) and a† = (l_B/√2)(k_x + ∂_y − y/l_B²), where l_B = √(ħ/eB) is the magnetic length, and defining ω = 2M/l_B², the effective Hamiltonian in a magnetic field reads:

H_eff(B) = ( Δ/2 − ω(a†a + 1/2)     i(√2 ħv_F/l_B) a      V_0                    0                   )
           ( −i(√2 ħv_F/l_B) a†     −Δ/2 + ω(a†a + 1/2)   0                      V_0                 )
           ( V_0                    0                     −Δ/2 + ω(a†a + 1/2)    i(√2 ħv_F/l_B) a    )
           ( 0                      V_0                   −i(√2 ħv_F/l_B) a†     Δ/2 − ω(a†a + 1/2)  ) .    (S2)

The term E_0 − Dk² is dropped from h_±(k), as it does not significantly influence the Landau level energies. Taking the eigenstates |n⟩ of the quantum harmonic oscillator as a basis, the trial solution for n = 0 is:

ψ_0 = (0, φ_{2,0}⟨y|0⟩, 0, φ_{4,0}⟨y|0⟩)ᵀ,    (S3)

and for n > 0:

ψ_n = (φ_{1,n}⟨y|n−1⟩, φ_{2,n}⟨y|n⟩, φ_{3,n}⟨y|n−1⟩, φ_{4,n}⟨y|n⟩)ᵀ,    (S4)

where φ_{i,n} = (C_{i,n}/√L_x) e^{ik_x x}, with the C_{i,n} constants, and

⟨y|n⟩ = [1/√(2ⁿ n! l_B √π)] exp[−(y − y_0)²/(2l_B²)] ℋ_n[(y − y_0)/l_B],    (S5)

where the ℋ_n are the Hermite polynomials and y_0 is the guiding center of the wave packet. Using Eqs. S2, S3, and S4, the energies of the n = 0 Landau levels are:

E_0^± = ±√[(−Δ/2 + ω/2)² + V_0²],    (S6)

and for n > 0:

E_{n,s}^± = ±√{[ε_n + s√((ω/2)² + V_0²)]²},  s = ±1,    (S7)

where ε_n = √[n(√2 ħv_F/l_B)² + (Δ/2 − nω)²]. For the Landau level spectra shown in Fig. 1 of the main text we used v_F = 5 × 10⁵ m/s (a realistic value for Cd3As2 thin films [4]) throughout. For the spectrum of a 2D TI (Fig. 1a), Δ = −20 meV, M = −800 meV nm², and V_0 = 0. For the spectrum of a 2D trivial insulator without SIA (Fig. 1b), Δ = −20 meV, M = 800 meV nm², and V_0 = 0. For the spectrum of a hybridized 3D TI with SIA (Fig. 1c) we used Δ = −20 meV, M = −900 meV nm², and V_0 = 10 meV. For the spectrum of a 3D TI with SIA (Fig. 1d) we used Δ = 0, M = 0, and V_0 = 10 meV. These values are chosen for illustrative purposes only, as realistic model parameters for Cd3As2 thin films on a substrate are not known.

Sample growth
The Cd3As2 thin films discussed in the main text were grown by molecular beam epitaxy (MBE).
Samples were grown on epi-ready undoped (001) GaSb substrates offcut 1.5° or 3° towards (111)B. The substrates were annealed for 12 hours at 150 °C in a high-vacuum load lock chamber. The substrate's native oxide was removed in an ultra-high vacuum (UHV) preparation chamber, either by atomic hydrogen etching for 1 hour at a substrate temperature of 500 °C (as measured by the heater thermocouple) or by thermal desorption under Sb2 flux in the UHV growth chamber. In the following, all temperatures were measured using an optical pyrometer, unless otherwise stated. The buffer layer growth consisted of 100 nm of GaSb grown at approximately 490 °C to smooth the surface, followed by 500 nm or 1 μm of Al0.45In0.55Sb grown between 390 °C and 410 °C. Afterwards, the substrate was cooled to a substrate heater thermocouple temperature between 120 °C and 170 °C to grow Cd3As2. To protect the Cd3As2 film surface, 3 nm of GaSb was grown as a capping layer at the Cd3As2 growth temperature.

Device fabrication
Eight-arm Hall bars were patterned using standard photolithographic techniques. Mesas were isolated by argon ion beam dry etching. Sample 1 Hall bars were contacted by sputtered Au/Ti (200/50 nm) leads or electron-beam evaporated Au/Pt/Ti (200/20/20 nm) leads. For top gates, ~25 nm of AlOx, grown by atomic layer deposition (ALD) using a 120 °C process, served as the dielectric. The dielectric thickness was determined using ellipsometry performed on a Si spectator chip from the same processing run. Immediately prior to AlOx deposition, the entire chip was exposed to a brief low-energy oxygen plasma. The gate electrodes were deposited by thermal evaporation of Au/Ni (200/50 nm) and covered the top and sides of the channel. Fig. S1 shows an optical micrograph of a Hall bar fabricated from the 20 nm sample. The labels denote the contact configuration used for 4-probe measurements.

Fig. S1. Optical micrograph of a Hall bar fabricated from sample 1.
The Cd3As2 appears reddish-pink. The labels denote the contact configuration used for 4-probe measurements.

Transport measurements
All transport measurements were performed in a Quantum Design PPMS with a base temperature T = 1.8 K, in magnetic fields B up to 14 T. Temperature-dependent measurements of the 20 nm sample were performed from 1.8 K to 40 K. Standard lock-in detection techniques were used for all measurements. Except for the 14 nm sample, 4-probe measurements were performed by applying a 10 mV AC (17.77 Hz) voltage to a ~10 MΩ resistor in series with the devices. The resulting current (0.15-1 nA) was measured using a Stanford Research Systems (SRS) SR830 lock-in amplifier. Differential voltages (Vxx and Vxy) were measured using SRS SR560 voltage preamplifiers with a gain of either 100 or 500, and the signal was band-pass filtered with cutoff frequencies at 10 Hz and 30 Hz and a 6 dB/oct. rolloff. The outputs of the preamplifiers were recorded using SRS SR830 lock-in amplifiers. 2-probe conductance measurements were performed by applying an AC (17.77 Hz) voltage (≤ 200 mV) across adjacent contacts along the same edge. The bias voltage was measured using an SR560 with a gain of 100 and the same filter settings as described above. The current through the device was measured using an Ithaco model 1211 transimpedance amplifier with a sensitivity of 10⁻⁷ A/V.

Four-probe measurements of the 14 nm sample were performed similarly, except that a 100 kΩ resistor was placed in parallel with the device to minimize Joule heating while measuring the highly resistive state around charge neutrality. Due to the highly insulating state around charge neutrality, measurements of sample 2 were limited to magnetic fields up to 9 T. For all samples, the gate voltage, Vg, was applied relative to circuit ground and was stepped in 2 mV or 5 mV increments using a Keithley 2400 source meter. The magnetic field was stepped in 200 mT (150 mT for the 14 nm sample) increments. For all data sets, readings
For all data sets, readings +taken at the same Vg value were averaged before plotting. Longitudinal (sxx) and transverse conductivity +(sxy) were obtain by tensor inversion as: +𝜎** = +@"" +@"" +% #@"' +% + + + + +(S8) +𝜎*+ = +'@"' +@"" +% #@"' +% + + + + +(S9) + +X-ray characterization +X-ray diffraction (XRD) and reflection (XRR) measurements were performed in triple-axis geometry on a +Rigaku SmartLab diffractometer equipped with a HyPix3000 detector. The incident optics included a 1 mm +physical slit and a Ge (220) 2-bounce monochromator to select Cu Kα. Figure S2(a) shows x-ray reflectivity +measurements for all samples. The sample thickness were determined by fitting the reflectivity data with +a model that included all layers of the heterostructures. Figure S2(b) shows 2θ/ω scans performed after +alignment to the GaSb substrate 004 reflection. At room temperature, because of slight difference in growth +conditions of the Al0.45In0.55Sb buffer layer, the measured Cd3As2 0016 reflection for the 14 nm and 22 nm +Vxx +Vxy +I+ +I- + +50μmsamples is shifted to lower angle (concurrently, the Al0.45In0.55Sb reflection is shifted to a higher angle) than +that of the 18 nm, 19 nm, and 20 nm. Figure S3 and S4 show a reciprocal space map (RSMs) taken around +the GaSb 224 reflection for 20 nm and 14 nm samples, respectively. Tables SI and SII report the lattice +constants, obtained from the RSMs, of the Cd3As2 film and buffer layers for the 20 nm sample and 14 nm +sample, respectively. Peak positions were determined by finding the maximum intensity of each reflection +after Gaussian filtering with a standard deviation parameter of 1.5. Uncertainties on the lattice constants +were estimated by propagating the uncertainties of the peak position in the standard way. Uncertainties of +the peak position were taken to equal the step size of the measurement. 
At room temperature, the 20 nm sample and 14 nm sample are under small compressive strain (−0.54% and −0.75%, respectively) relative to bulk crystals [5].

Fig. S2. X-ray reflectivity data (a) and out-of-plane 2θ/ω x-ray diffraction patterns (b) of the samples discussed in the manuscript. [Panel (a): reflectivity curves versus 2θ for the 14, 18, 19, 20, and 22 nm samples. Panel (b): 2θ/ω scans (56°-62°) showing the Cd3As2 (0016), GaSb (004), and Al0.45In0.55Sb (004) reflections.]

Fig. S3. Reciprocal space map taken around the GaSb 224 reflection of the 20 nm sample. The dashed white line shows the cubic condition (a = c, or q[110] = √2 q[100]) and the solid white line marks the q[110] position of the Al0.45In0.55Sb buffer layer.

Fig. S4. Reciprocal space map taken around the GaSb (224) reflection for the 14 nm sample. The dashed white line shows the cubic condition (a = c, or q[110] = √2 q[100]) and the solid white line marks the q[110] position of the Al0.45In0.55Sb buffer layer. [Both maps show the GaSb 224, Al0.45In0.55Sb 224, and Cd3As2 (4 4 16) reflections.]

Table SI. Lattice constants of Cd3As2 and Al0.45In0.55Sb obtained from the RSM in Fig. S3.
Layer            a (Å)           c (Å)
Cd3As2           12.565±0.001    25.575±0.007
Al0.45In0.55Sb   6.280±0.001     6.319±0.002

Table SII. Lattice constants of Cd3As2 and Al0.45In0.55Sb obtained from the RSM in Fig. S4.
Layer            a (Å)           c (Å)
Cd3As2           12.538±0.001    25.62±0.01
Al0.45In0.55Sb   6.265±0.001     6.309±0.003

Resistance data
Figures S5, S6, S7, S8, and S9 show the longitudinal resistances (Rxx) and Hall resistances (Rxy) used to compute the longitudinal (σxx) and Hall (σxy) conductivities of the samples.

Fig. S5.
Longitudinal resistance, Rxx, (a,b) and Hall resistance, Rxy, (c) of the 20 nm sample. Traces in (b) are offset by 1 kΩ.

Fig. S6. Longitudinal resistance, Rxx, (a,b) and Hall resistance, Rxy, (c) of the 18 nm sample. Traces in (b) are offset by 1 kΩ.

Fig. S7. Longitudinal resistance, Rxx, (a,b) and Hall resistance, Rxy, (c) of the 19 nm sample. Traces in (b) are offset by 1 kΩ.

Fig. S8. Longitudinal resistance, Rxx, (a,b) and Hall resistance, Rxy, (c) of the 22 nm sample. Traces in (b) are offset by 1 kΩ.

Fig. S9. Longitudinal resistance, Rxx, (a,b) and Hall resistance, Rxy, (c) of the 14 nm sample.

Gate voltage dependence of sheet carrier density and Hall mobility
Figure S10 shows the gate voltage, Vg, dependence of the sheet carrier density, n2D, and the Hall mobility of the 20 nm sample. The capacitance density of the device was determined from the slope of n2D versus Vg.
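A minimal sketch of this slope extraction is given below. The n2D(Vg) data are synthetic and purely illustrative (chosen to produce two linear regimes with a kink, as in Fig. S10), not the measured values:

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # elementary charge, C

# Synthetic, illustrative n2D(Vg) data (cm^-2) with a kink at Vg = 1 V:
vg = np.linspace(-1.0, 4.0, 21)
n2d = np.where(vg < 1.0,
               0.8e12 + 0.39e12 * vg,
               1.19e12 + 0.82e12 * (vg - 1.0))

def capacitance_density(vg, n2d):
    """Capacitance density C = e * d(n2D)/d(Vg), converted to nF/cm^2."""
    slope = np.polyfit(vg, n2d, 1)[0]  # carriers per cm^2 per volt
    return E_CHARGE * slope * 1e9      # F/cm^2 -> nF/cm^2

c_low = capacitance_density(vg[vg < 1.0], n2d[vg < 1.0])    # below the kink
c_high = capacitance_density(vg[vg >= 1.0], n2d[vg >= 1.0])  # above the kink
```

Fitting each linear regime separately, as with the red and blue lines in Fig. S10(a), yields one capacitance density per regime.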
There is a distinct change in slope at Vg = 1 V, the approximate voltage at which an additional subband appears [see main text as well as Fig. S12(a)], reflecting the influence of the quantum capacitance with the change in density of states. Below Vg = 1 V the capacitance density is 62 nF/cm², and above Vg = 1 V it is 131 nF/cm². Assuming a dielectric constant of 9 for the 25 nm thick Al2O3 gate dielectric, the geometrical capacitance is 320 nF/cm². The discrepancy is due to the influence of the quantum capacitance in series with the gate dielectric.

Fig. S10. Sheet carrier density, n2D, (a) and Hall mobility (b) of the 20 nm sample. The red and blue lines in (a) are linear fits used to determine the capacitance density.

Field dependence of ν = 0
Figure S11 shows σxy of sample 1 at 4 T, 9.6 T, and 14 T around ν = 0. For 4 T and 14 T there is a clear plateau at σxy = 0, whereas at 9.6 T σxy changes sign smoothly between the ν = 1 and ν = −1 plateaus.

Fig. S11. σxy of sample 1 at 4 T, 9.6 T, and 14 T around ν = 0.

σxx of the 20 nm sample and the 14 nm sample with guides to the eye
Fig. S12 shows σxx for sample 1 (a) and sample 2 (b) with lines overlaid to highlight the Landau levels. The lines serve as guides to the positions of the Landau levels only (they are not fits), as they neglect the
The pink dashed lines are guides to eye for the Landau levels +originating from the states nearest to charge neutrality and the green dashed lines in (a) are guides to the eye +for the two Landau levels originating from a higher energy subband. +Temperature dependence of G at ν = 0 +Figure S13 shows temperature dependent two-point conductance of the 20 nm sample. Fitting of the +temperature dependence of G, shown in Fig. 3 in the main text, was done using the non-linear least squares +method as implemented in the SciPy Python package [6]. Uncertainties on the fitted parameters are taken +to be equal to the square root of the diagonal elements of the covariance matrix. Tables SII and SIII show +the results for the fit parameters. + +Fig. S13. Temperature dependent conductance (G) of the 20 sampleat different magnetic fields. The dashed lines +in (a) and (c) are fits to the 2D Mott variable range hopping equation and Arrhenius equation, respectively. The +dashed lines in (b) are guides to the eye highlighting the range of T-linear conductance. + +−4 +−2 +0 +2 +4 +9g (9) +0.0 +1.5 +3.0 +4.5 +6.0 +7.5 +9.0 +B (7) +0 +8 +σxx (e2/h) +ν = -1 +2 +4 +0 +1 +(b) +−4 +−2 +0 +2 +4 +Vg (V) +0 +2 +4 +6 +8 +10 +12 +14 +B (T) +0 +5 +σxx (e2/h) +ν = -1 +4 +0 +3 +2 +5 +8 +6 +0 +1 +(a) +0 +10 +20 +30 +40 +7emperture (.) +0.05 +0.10 +0.15 +0.20 +0.25 +G (e2/h) +0 7 +1 7 +2 7 +3 7 +4 7 +5 7 +6 7 +7 7 +(a) +0 +10 +20 +30 +40 +Temperature (.) +0.10 +0.15 +0.20 +0.25 +G (e2/h) +8 T +9 T +10 T +11 T +12 T +(b) +0 +10 +20 +30 +40 +Temperture (.) +0.04 +0.08 +0.12 +0.16 +G (e2/h) +13 T +14 T +(c) + + +Table SIII. Parameters obtained from fitting the minimum 2-point conductance (G) at n = 0 versus temperature to +the 2D Mott variable range hopping model, 𝐺(𝑇) = 𝐺+expW−(𝑇+/𝑇)&//Y. The experimental data and fits are shown +in Fig. S13(a). 
Magnetic Field (T)   G0 (e2/h)     T0 (K)
0                    0.70±0.04     58±7
1                    0.75±0.03     82±6
2                    0.81±0.03     125±8
3                    0.74±0.03     112±7
4                    0.64±0.02     78±5
5                    0.50±0.02     40±3
6                    0.74±0.01     11±1
7                    0.264±0.003   1.2±0.1

Table SIV. Parameters obtained from fitting the minimum 2-point conductance at n = 0 versus temperature to the Arrhenius equation, G(T) = G_0 exp[−E_a/T]. The experimental data and fits are shown in Fig. S13(c).
Magnetic Field (T)   G0 (e2/h)     Ea (K)
13                   0.158±0.002   4.8±0.2
14                   0.143±0.005   10.3±0.8

Magnetic field dependence of σxx between the n = 0 Landau levels
Figure S14 shows the conductivity between the n = 0 Landau levels versus magnetic field. For B > 11 T and B < 7.8 T, the n = 0 Landau levels are well separated and there is a clear minimum. Around the crossing of the n = 0 Landau levels (11 T ≥ B ≥ 7.8 T) there is no gap between the n = 0 Landau levels, and the maximum conductivity is plotted instead.

Fig. S14. Conductivity of sample 1 between the n = 0 Landau levels versus magnetic field.

References
[1] H.-Z. Lu, W.-Y. Shan, W. Yao, Q. Niu, and S.-Q. Shen, Phys. Rev. B 81, 115407 (2010).
[2] B. A. Bernevig, T. L. Hughes, and S. C. Zhang, Science 314, 1757 (2006).
[3] S. B. Zhang, H. Z. Lu, and S. Q. Shen, Sci. Rep. 5, 13277 (2015).
[4] O. F. Shoron, M. Goyal, B. H. Guo, D. A. Kealhofer, T. Schumann, and S. Stemmer, Adv. Electron. Mater. 6, 2000676 (2020).
[5] M. N. Ali, Q. Gibson, S. Jeon, B. B. Zhou, A. Yazdani, and R. J. Cava, Inorg. Chem. 53, 4062 (2014).
[6] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, I. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R.
Harris, A. M. Archibald, A. N. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy Contributors, Nat. Methods 17, 261 (2020).

Two-dimensional topological insulator state in cadmium arsenide thin films

Alexander C.
Lygo, Binghao Guo, Arman Rashidi, Victor Huang, Pablo Cuadros-Romero, and Susanne Stemmer a)

Materials Department, University of California, Santa Barbara, California 93106-5050, USA

a) Corresponding author. Email: stemmer@mrl.ucsb.edu

Abstract

Two-dimensional topological insulators (2D TIs) are a highly desired quantum phase, but few materials have demonstrated clear signatures of a 2D TI state. It has been predicted that 2D TIs can be created from thin films of three-dimensional TIs by reducing the film thickness until the surface states hybridize.
Here, we employ this technique to report the first observation of a 2D TI state in epitaxial thin films of cadmium arsenide, a prototype Dirac semimetal in bulk form and a 3D TI in thin films. Using magnetotransport measurements with electrostatic gating, we observe a Landau level spectrum and quantum Hall effect that are in excellent agreement with those of an ideal 2D TI. Specifically, we observe a crossing of the zeroth Landau levels at a critical magnetic field. We show that the film thickness can be used to tune the critical magnetic field. Moreover, a larger change in film thickness causes a transition from a 2D TI to a 2D trivial insulator, just as predicted by theory. The high degree of tunability available in epitaxial cadmium arsenide heterostructures can thus be used to fine-tune the 2D TI, which is essential for future topological devices.
Two-dimensional topological insulators (2D TIs) are a highly sought-after phase owing to their spin-polarized counterpropagating (helical) edge states, which are of great interest for their ability to host novel physical phenomena, such as the quantum spin Hall effect [1-3], and for topologically protected quantum computing [4]. 2D TIs are defined as possessing an inverted (relative to a normal semiconductor band structure) and gapped band structure, which together, by the bulk-boundary correspondence, give rise to ℤ2 topological order and novel edge states [1]. Despite intensive research and a number of theoretically proposed systems [5], experimentally confirmed 2D TIs are, however, very rare. Materials for which experimental signatures indicative of a 2D TI state have been reported include quantum well structures [6-8] and monolayer van der Waals compounds [9, 10]. An alternative, and largely unexplored, route to a 2D TI is thin films of three-dimensional (3D) TIs whose film thickness is reduced such that there is a spatial overlap of the surface state wavefunctions [11-13].
In this case, the 3D TI's surface states gap out, forming degenerate massive Dirac states, and the material is a 2D TI if there is inversion between the confinement-induced electron-like and hole-like subbands [13, 14]. An additional requirement is the absence of a strong potential difference between the top and bottom surfaces of the film (so-called structural inversion asymmetry, or SIA), as this introduces additional coupling between the massive Dirac states that, if strong enough, destroys the 2D TI phase [13, 15, 16]. Furthermore, as a function of film thickness, it is predicted that changes in the subband ordering can cause transitions between 2D TI and 2D trivial insulator states. The thicknesses at which these transitions are expected to occur depend sensitively on band parameters and microscopic details of the system [12]. Hence, it is of interest to explore high-quality thin films of 3D TIs prepared with controlled thicknesses and negligible SIA.

In this study, we employ the confinement approach to epitaxial thin films of cadmium arsenide (Cd3As2).
While Cd3As2 is a prototype 3D Dirac semimetal with large band inversion in bulk [14, 17-20], it is a nearly ideal 3D TI in thin films [21, 22], with high surface state mobility and no parasitic bulk conduction at low temperature. As the Cd3As2 film thickness is further reduced, theory predicts a transition to a 2D TI (quantum spin Hall insulator) with a wide energy gap [14]. Recently, we reported evidence of surface state hybridization in a 20 nm-thin Cd3As2 film [23], an essential step towards a 2D TI. Here, we present the first experimental evidence of a 2D TI state in (001)-oriented Cd3As2 films. To this end, we use Landau level spectroscopy, which, as we will discuss next, can unambiguously identify surface state hybridization and the inversion of the bands from the behavior of the zeroth Landau levels in high-quality films, as well as other essential details, such as SIA. Moreover, just as predicted by the theoretical models, we show that a small (6 nm) change in film thickness causes a transition from a 2D TI to a 2D trivial insulator.
A hallmark of a 2D TI in a perpendicular magnetic field (B) is a crossing of two zero-energy (n = 0) Landau levels at a critical field (Bc), as shown in Fig. 1(a), and a zero-conductance (ν = 0) quantum Hall plateau when the chemical potential lies in the energy gap between the n = 0 Landau levels (after Refs. [15, 16]; for details of the calculations, see the Supplementary Materials [24]). Because of band inversion, one of the two n = 0 Landau levels is electron-like and originates from the valence band, and the other is hole-like and originates from the conduction band. At Bc, the system undergoes a phase transition from a nontrivial insulator to a trivial one. Concurrently, the ν = 0 quantum Hall plateau disappears but then reemerges for B > Bc.
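A minimal numerical sketch of this crossing, using Eq. S6 of the Supplementary Information with V_0 = 0 and the illustrative 2D TI parameters quoted there (Δ = −20 meV, M = −800 meV nm²; these are not values fitted to the samples):

```python
import numpy as np

HBAR_OVER_E = 658.21  # hbar/e in T*nm^2, so l_B^2 = HBAR_OVER_E / B  [nm^2]

def n0_gap(B, delta_mev, M_mev_nm2, V0_mev=0.0):
    """Gap between the two n = 0 Landau levels of Eq. S6,
    E0(+/-) = +/- sqrt((-Delta/2 + omega/2)^2 + V0^2), with omega = 2M/l_B^2."""
    omega = 2.0 * M_mev_nm2 * B / HBAR_OVER_E  # meV
    return 2.0 * np.sqrt((-delta_mev / 2.0 + omega / 2.0) ** 2 + V0_mev ** 2)

# With Delta*M > 0 and V0 = 0, the gap closes where omega(Bc) = Delta:
B = np.linspace(0.1, 14.0, 2000)
Bc = B[np.argmin(n0_gap(B, delta_mev=-20.0, M_mev_nm2=-800.0))]
```

For these illustrative parameters the gap closes near 8.2 T; the measured crossing field of a given film depends on its actual Δ and M, which is how the film thickness tunes Bc.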
The zero-energy Landau level crossing and re-entrant quantum Hall effect at Bc are robust signatures of a 2D TI state [6], while its other key features, the helical edge states and quantum spin Hall effect, are easily obscured by trivial edge conduction paths [27, 28] and by their extreme sensitivity to disorder [29-31]. The zeroth Landau level crossing at Bc distinguishes the 2D TI from all other potential states of a topological thin film. For example, Fig. 1 shows calculated [15, 16] Landau fan diagrams of several other possible states [24], such as the Landau level spectrum of a hybridized 3D TI without band inversion (a film in the trivial thickness regime, for example) [Fig. 1(b)].
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 1(c) and 1(d).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' This latter feature results in complicated filling factor (ν) sequences in the quantum Hall effect as a function of carrier density and magnetic fields, as observed for thicker (~ 50 nm) Cd3As2 films that are in the 3D TI state [21].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' No such crossings occur without SIA [see Figs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 1(a) and 1(b)].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Combined, these distinguishing features, especially the dispersion of the n = 0 Landau levels and a reentrant ν = 0 quantum Hall plateau, provide experimental signatures of the four 6 different possible phases in thin films.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Clearly, it is essential to tune the Fermi level to charge neutrality and into the gap between the n = 0 Landau levels to distinguish these phases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' To study the electronic state of very thin Cd3As2 films, we performed low temperature magnetotransport measurements on top gated Hall bar structures.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Details about sample growth, device fabrication, transport measurements can be found in the Supplementary Information [24].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Figures 2 (a,b) show the longitudinal (σxx) and Hall conductivity (σxy), respectively, of a 20 nm (001)-oriented Cd3As2 film (sample 1), calculated by tensor inversion from the resistivities (shown in the Supplementary Information [24]), as a function of applied top gate bias, Vg, and B (for a plot of Vg versus carrier density, see Supplementary Information [24]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' The labels in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 2(a), marking the σxx minima, denote the filling factors of the corresponding integer quantum Hall plateaus in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 2(b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' We observe a sequence of Landau levels that produce well defined quantum Hall plateaus with both even and odd ν.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Taken alone, these filling sequences would be consistent with any of the states in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 1.' 
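The tensor inversion mentioned above is the standard 2D relation σxx = ρxx/(ρxx² + ρxy²), σxy = −ρxy/(ρxx² + ρxy²); a minimal sketch follows (the function and variable names are our own, and the sign of σxy depends on the convention chosen for the resistivity tensor):

```python
def conductivities(rho_xx, rho_xy):
    """Invert the 2D resistivity tensor [[rho_xx, rho_xy], [-rho_xy, rho_xx]]
    to obtain the longitudinal and Hall conductivities.

    Units: if rho is in ohms per square, sigma comes out in siemens (1/ohm);
    sign convention for sigma_xy varies between references.
    """
    denom = rho_xx**2 + rho_xy**2
    sigma_xx = rho_xx / denom
    sigma_xy = -rho_xy / denom
    return sigma_xx, sigma_xy

# On a quantum Hall plateau rho_xx -> 0, so sigma_xx -> 0 while
# |sigma_xy| -> 1/|rho_xy|, reproducing the quantized Hall conductivity.
s_xx, s_xy = conductivities(rho_xx=0.0, rho_xy=1.0)
```

Note that σxx → 0 on a plateau even though ρxx → 0 as well; both vanish simultaneously, which is why minima of σxx in Fig. 2(a) line up with the plateaus in Fig. 2(b).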
As discussed above, the low energy portion of the spectrum, around ν = 0, is crucial to distinguish them. We thus first turn our attention to the region around charge neutrality (−1.5 V > Vg > −2 V). The first notable feature is the existence of a gap at zero B, as evidenced by the very low conductivity (σxx ≈ 0.06 e2/h) and insulating behavior (the temperature dependence is discussed below). The insulating state at B = 0 shows that the top and bottom Dirac surface states are hybridized and, as a result, are gapped out. Moreover, the salient feature here is a pair of Landau levels that originate at different Vg and, as B is increased, converge, meeting at approximately Bc = 9.4 T, before diverging at larger B. Concurrently, the ν = 0 quantum Hall plateau vanishes and then re-emerges.

This latter feature can be seen especially clearly in the σxy traces shown in Fig. 2(b): here, ν = 0 is present at low B (yellow-green traces), absent at intermediate B, and re-entrant at B > 10 T (dark blue traces) (for additional clarity, see also the Supplementary Material for a zoom-in of the σxy traces around the ν = 0 plateau [24]). Accordingly, we identify the Landau levels crossing at Bc = 9.4 T as the n = 0 levels. In the regions where these two Landau levels are well separated, σxx is approximately zero and σxy plateaus at zero. By contrast, where σxx is finite (around Bc), σxy changes sign smoothly (Fig. S5 [24]). The Landau level fan diagram is thus in perfect agreement with that of an idealized prototype 2D TI shown in Fig. 1(a). It is not in agreement with any of the other possible states in Fig. 1. We note that it is the significant dispersion (in B) of both n = 0 levels, and the absence of any other states nearby, that allows for the clear re-entrant ν = 0 quantum Hall plateau. By comparison, for HgTe quantum wells the p-type n = 0 Landau level is nearly non-dispersive (indicative of a low Fermi velocity) and has a more complex dispersion [37, 38], causing the re-entrant quantum Hall effect to be rather complex (e.g., ν = 0 → ν = 1 (−1) → ν = 0 [6]).

Next, we discuss the Landau level spectrum away from the charge neutrality point, important for understanding the stability of the 2D TI state in this film. There are two Landau levels, marked with arrows in Fig. 2(a), from a higher energy, bulk-originated subband. This is evident from the fact that they can be traced back to a different point on the Vg axis (see Fig. S6, where lines have been drawn for clarity to show the intercepts of the Landau levels with the Vg axis [24]). When the chemical potential crosses either of these additional Landau levels, ν changes by 1 [see the corresponding region in Fig. 2(b)], indicating that they are non-degenerate. The existence of electronic subbands is consistent with expectations for a quantum confined thin film [12, 14]. The second important feature of the Landau level spectrum is that, with the exception of the n = 0 Landau levels, Landau levels that originate from the electronic states near charge neutrality do not cross.

As discussed above, the absence of any other crossings in the Landau fans to which these n = 0 levels belong shows that there is no perceptible energy offset (SIA), although it is present for thicker films in similar structures [21, 22]. While it is perhaps not surprising that a 20 nm thin film with a small gap does not sustain a potential offset, the absence of strong SIA is an essential prerequisite for the observed 2D TI state. Finally, in the region Vg < −2 V we observe a deep minimum in σxx and a plateau in σxy corresponding to ν = −1. At more negative Vg we do not observe additional quantum Hall plateaus, possibly because the chemical potential resides in an electronic band of the buffer layer, causing spillover of carriers, or because of a high density of Landau levels from the heavier, lower energy valence band states (e.g., similar to HgTe quantum wells).
To demonstrate the sensitivity of the 2D TI state to small changes in film thickness, we performed additional transport measurements on 18 nm, 19 nm, and 22 nm Cd3As2 films prepared under nominally identical conditions as sample 1 (see Fig. 3). The main result is a qualitatively identical Landau level spectrum. The primary quantitative differences as a function of film thickness are: (i) with increasing film thickness Bc increases, which is similar to the thickness dependence observed in HgTe quantum wells [6], and (ii) the bulk-originated subband moves to higher Vg, consistent with the expected behavior for quantum confined thin films. The similarity of the four Landau level spectra is in agreement with our previous observations, namely a qualitative consistency of the higher energy Landau level (n > 0) behavior with thickness [39]. However, we now see clearly the opening of a hybridization gap, which is most dramatically displayed in the behavior of the zero energy Landau levels, which is extremely sensitive to the thickness.

As the film thickness is further decreased, it is predicted [14] that changes in the subband ordering cause a transition from a 2D TI to a 2D trivial insulator. Figure 4 shows σxx and σxy versus Vg and B for a 14 nm film prepared under similar conditions as those discussed above. Qualitatively, away from the charge neutrality point [Figs. 4(a) and 4(b)], the data are strikingly similar to the other samples. Quantum Hall plateaus with both even and odd ν, indicating a degeneracy lifting, are present, but no crossings of higher energy Landau levels in the low energy fans are evident, demonstrating again the absence of SIA. Distinct from the other films, the Landau levels that originate from states of a higher energy subband are absent in the Vg range studied here, consistent with the expectation that the subband ordering changes as a function of film thickness. The important difference of this film is seen near charge neutrality.

For all samples, there is a clear insulating state at B = 0 accompanied by σxy = 0, indicating hybridization of the surface states; for the 14 nm film, however, the n = 0 Landau levels diverge and the ν = 0 quantum Hall plateau widens with increasing magnetic field. The Landau level spectrum of the 14 nm film is in remarkably good agreement with that of a 2D trivial insulator shown in Fig. 1(b). We conclude from this that a 2D TI state in Cd3As2 can be achieved in 18-22 nm films, while a small reduction in the film thickness (to 14 nm) causes a transition from a 2D topological insulator to a trivial one, consistent with the change in the subband ordering that is evident in the higher energy spectrum. Small discrepancies between the thickness ranges for the different phases in the observations vs. predictions (ref. [14]) can easily be explained by the fact that the microscopic details of the heterostructures, which determine important parameters such as the Fermi velocity, were not considered in the models.

In summary, we have observed the hallmarks of a nearly ideal 2D TI state in thin films of (001)-oriented Cd3As2, including insulating behavior at zero field and an energy gap between two n = 0 Landau levels that closes at Bc. The zeroth Landau level crossing is remarkably well resolved, aided by the relatively high Fermi velocities of the electronic states that give rise to the zero-energy Landau levels, low disorder, and a good separation in energy from, e.g., the high density of low mobility valence band states found in other systems [37, 40]. These results establish that thin films of Cd3As2 constitute a new member of the very small, highly sought-after family of 2D TIs discovered to date. Crucially, this 2D TI state is realized via a previously theoretically suggested route, namely by quantum confinement.
We also demonstrated that reducing the film thickness further induces a transition to a 2D trivial insulator, also consistent with theoretical predictions. The wide range of additional heterostructure parameters that can be tuned, such as film strain, makes the 2D TI phase in Cd3As2 films extraordinarily tunable. This tunability could prove extremely useful in designing and testing future superconducting hybrid junctions for quantum information systems, which depend on finely tuned energy scales, and novel correlated states [41]. This study provides clear directions for future work, such as a more detailed study of the thickness dependence of the electronic state of Cd3As2 films, particularly one that includes ultrathin (< 10 nm) films. Finally, the results presented here demonstrate the possibility of realizing the quantum spin Hall effect in thin films of Cd3As2; a next step should be investigations of the edge state physics, which requires smaller devices and low defect density mesa boundaries.

Acknowledgements

The authors are grateful to Andrea Young and Xi Dai for very helpful discussions. The research was supported by the Air Force Office of Scientific Research (Grant No. FA9550-21-1-0180) and by the Office of Naval Research (Grant No. N00014-21-1-2474). A.C.L. and B.G. also thank the Graduate Research Fellowship Program of the U.S. National Science Foundation for support (Grant Nos. 1650114 and 2139319). This work made use of the MRL Shared Experimental Facilities, which are supported by the MRSEC Program of the U.S. National Science Foundation under Award No. DMR 1720256.

Appendix A: Temperature dependent conductance at ν = 0 in the 2D TI state

To further investigate the 2D TI, including the gapped state at low B, we performed temperature dependent two-point conductance (G) measurements, shown in the Supplementary Information [24]. Figure S13 [24] shows the temperature dependence of the minimum G of film 1 between the n = 0 Landau levels at B < 8 T [Fig. S13(a)], around Bc [Fig. S13(b)], and for B > 12 T [Fig. S13(c)].
For B < 8 T and B > 12 T, G shows an exponential-like increase with increasing temperature, consistent with a gapped state. In these two regimes, the observed temperature dependence can be described by G(T) = G0 exp[−(T0/T)^p], where G0 is a temperature independent prefactor, T0 is the characteristic hopping temperature, and p is a model parameter that depends on the density of states at the Fermi level and takes values 0 < p ≤ 1 (see Supplementary Information [24]). A value of p = 1 corresponds to Arrhenius behavior with T0 = Ea/kB, where Ea is the activation energy and kB is Boltzmann's constant. We find that for B < 8 T [Fig. S13(a)], the temperature dependence of G can be described by p = 1/3, i.e., 2D Mott variable range hopping (VRH), over some temperature range (see dashed lines). At intermediate B (7 T < B < 13 T), G shows dramatically different behavior [Fig. S13(b)]. Within this range, but away from Bc, G is a nonmonotonic function of temperature, with dG/dT > 0 at low temperatures that transitions to a T-linear dependence with a negative slope at higher temperatures. At Bc (B = 9 T and B = 10 T traces), G shows metallic behavior (it increases monotonically with decreasing temperature) and is approximately linear in temperature above 12.5 K. For B ≥ 13 T [Fig. S13(c)], the temperature dependence roughly follows Arrhenius behavior above 3.5 K, consistent with a clean, gapped state. Most importantly, the crossover from insulating to metallic temperature dependence occurs around B = Bc, consistent with a crossing of the n = 0 Landau levels. Near Bc, the maximum value of σxx is ~0.85 e2/h (see Supplementary Information Fig. S14 for σxx values between the n = 0 Landau levels [24]). Interestingly, this σxx value is close to twice the universal value of 0.5 e2/h [42], consistent with the crossing of two Landau levels; by contrast, at the other quantum Hall transitions, σxx is close to 0.5 e2/h. (We note that G, presented in Fig. S13, is smaller than σxx due to the contribution of contact/series resistances, including the ungated regions near the contacts.) The origin of the "strange metal" (T-linear) behavior at the crossing of hole-like and electron-like zeroth Landau levels and, more generally, the nature of this "Dirac-like" state warrant further investigation, including theoretically.
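The hopping form G(T) = G0 exp[−(T0/T)^p] used in this appendix is straightforward to evaluate; a minimal sketch, with illustrative (not fitted) parameter values, follows:

```python
import math

def hopping_conductance(T, G0, T0, p):
    """Generalized hopping law G(T) = G0 * exp(-(T0 / T)**p).

    p = 1/3 corresponds to 2D Mott variable range hopping;
    p = 1 corresponds to Arrhenius activation with T0 = Ea / kB.
    T and T0 in kelvin; G0 sets the high-temperature scale.
    """
    return G0 * math.exp(-((T0 / T) ** p))

# Illustrative values only (T0 = 125 K as quoted for B = 2 T, G0 set to 1):
G_mott = hopping_conductance(T=4.0, G0=1.0, T0=125.0, p=1 / 3)
G_arrhenius = hopping_conductance(T=4.0, G0=1.0, T0=125.0, p=1.0)

# Both forms are insulating: G rises with increasing temperature.
assert hopping_conductance(10.0, 1.0, 125.0, 1 / 3) > G_mott
# At the same T and T0, Arrhenius activation suppresses G far more strongly
# than Mott VRH, which is why the two regimes are distinguishable in the data.
assert G_arrhenius < G_mott
```

Fitting ln G against T^(−p) for candidate values of p (here 1/3 and 1) and comparing the quality of the straight-line fits is the usual way the exponent, and hence the hopping mechanism, is identified.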
Appendix B: Detectability of the helical edge states in the 2D TI state

We briefly comment here on the detectability of helical edge states, expected to be present for B < Bc. It is generally accepted in the 2D TI literature [9, 43, 44] that much smaller devices than those studied here, with carefully prepared edges [31], are needed to characterize the very fragile helical edge states, because the dimensions of the devices studied here exceed the phase coherence length (~1 µm). In devices larger than this dimension, it is generally found that σxx is smaller than 2e2/h at all temperatures [9, 43, 44]. Future experiments will address fabrication challenges for smaller devices. Already in these large devices, however, the data hint at potentially rich physics. For one, the deepest minimum of σxx occurs at B = 2.4 T [σxx(2.4 T) = 0.008 e2/h] and not at B = 0 [σxx(0 T) = 0.06 e2/h], where the energy gap is largest (see Supplementary Information Fig. S14 [24]). This is also reflected in the temperature dependence of G, because T0 is largest (125 ± 8 K) at B = 2 T. One possible interpretation is that the decrease in σxx away from B = 0 reflects increased scattering of the edge states due to time-reversal symmetry breaking [45]. Competition between this enhanced scattering and the closing gap causes the minimum of σxx to occur at finite B. Secondly, VRH does not completely describe the data: at the lowest temperatures, we note that G saturates [see Fig. S13(a)], possibly indicative of a second transport path.

References

[1] C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005).
[2] B. A. Bernevig and S.-C. Zhang, Phys. Rev. Lett.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 96, 106802 (2006).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [3] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Bernevig, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Hughes, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Zhang, Science 314, 1757 (2006).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [4] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Fu, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Kane, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Rev.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 100, 096407 (2008).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [5] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Cao, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Chen, Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Quantum Technol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 2, 1900026 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [6] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Konig, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Wiedmann, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Brune, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Roth, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Buhmann, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Molenkamp, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Qi, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Zhang, Science 318, 766 (2007).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [7] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Roth, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Brune, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Buhmann, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Molenkamp, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Maciejko, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Qi, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Zhang, Science 325, 294 (2009).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [8] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Du, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Knez, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Sullivan, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Du, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Rev.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 114, 096802 (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [9] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Wu, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Fatemi, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Gibson, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Watanabe, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Taniguchi, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Cava, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Jarillo- Herrero, Science 359, 76 (2018).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [10] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Tang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Zhang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Wong, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Pedramrazi, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Tsai, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Jia, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Moritz, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Claassen, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Ryu, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Kahn, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Jiang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Yan, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Hashimoto, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Lu, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Moore, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Hwang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Hwang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Hussain, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Chen, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Ugeda, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Liu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Xie, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Devereaux, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Crommie, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Mo, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Shen, Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Phys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 13, 683 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [11] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Linder, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Yokoyama, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Sudbo, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' B 80, 205401 (2009).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [12] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='-Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Lu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Shan, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Yao, Q.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Niu, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='-Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Shen, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' B 81, 115407 (2010).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [13] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Liu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Zhang, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Yan, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Qi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Frauenheim, X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Dai, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Fang, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Zhang, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' B 81, 041307 (2010).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [14] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Wang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Weng, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Wu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Dai, and Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Fang, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' B 88, 125427 (2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [15] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Shan, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Lu, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Shen, New J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 12, 043048 (2010).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 15 [16] S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Zhang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Lu, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Shen, Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Rep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 5, 13277 (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' [17] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Borisenko, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Gibson, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Evtushinsky, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Zabolotnyy, B.' 
Buchner, and R. J. Cava, Phys. Rev. Lett. 113, 165109 (2014).
[18] M. Neupane, S. Y. Xu, R. Sankar, N. Alidoust, G. Bian, C. Liu, I. Belopolski, T. R. Chang, H. T. Jeng, H. Lin, A. Bansil, F. Chou, and M. Z. Hasan, Nat. Comm. 5, 3786 (2014).
[19] Z. K. Liu, J. Jiang, B. Zhou, Z. J. Wang, Y. Zhang, H. M. Weng, D. Prabhakaran, S.-K. Mo, H. Peng, P. Dudin, T. Kim, M. Hoesch, Z. Fang, X. Dai, Z. X. Shen, D. L. Feng, Z. Hussain, and Y. L. Chen, Nat. Mater. 13, 677 (2014).
[20] M. N. Ali, Q. Gibson, S. Jeon, B. B. Zhou, A. Yazdani, and R. J. Cava, Inorg. Chem. 53, 4062–4067 (2014).
[21] D. A. Kealhofer, L. Galletti, T. Schumann, A. Suslov, and S. Stemmer, Phys. Rev. X 10, 011050 (2020).
[22] D. A. Kealhofer, R. Kealhofer, D. Ohara, T. N. Pardue, and S. Stemmer, Sci. Adv. 8, eabn4479 (2022).
[23] B. Guo, A. C. Lygo, X. Dai, and S. Stemmer, APL Mater. 10, 091116 (2022).
[24] See Supplemental Material [link to be inserted by publisher] for the models used to calculate Fig. 1; details of the film growth, device fabrication, and transport measurements; x-ray characterization of the samples; the measured resistance data from which the conductivity data shown in the main text were calculated; the gate voltage dependence of the sheet carrier density and Hall mobility extracted from the low-field Hall effect; Landau level data shown in the main text with guides to the eye; the temperature dependence of G at ν = 0; and the magnetic field dependence of σxx between the n = 0 Landau levels. The Supplementary Information also contains Refs. [25,26].
[25] O. F. Shoron, M. Goyal, B. H. Guo, D. A. Kealhofer, T. Schumann, and S. Stemmer, Adv. Electron. Mater. 6, 2000676 (2020).
[26] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, I. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. N. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy Contributors, Nat. Methods 17, 261 (2020).
[27] E. Y. Ma, M. R. Calvo, J. Wang, B. Lian, M. Mühlbauer, C. Brüne, Y. T. Cui, K. J. Lai, W. Kundhikanjana, Y. L. Yang, M. Baenninger, M. König, C. Ames, H. Buhmann, P. Leubner, L. W. Molenkamp, S. C. Zhang, D. Goldhaber-Gordon, M. A. Kelly, and Z. X. Shen, Nat. Comm. 6, 7252 (2015).
[28] F. Nichele, H. J. Suominen, M. Kjaergaard, C. M. Marcus, E. Sajadi, J. A. Folk, F. M. Qu, A. J. A. Beukman, F. K. de Vries, J. van Veen, S. Nadj-Perge, L. P. Kouwenhoven, B. M. Nguyen, A. A. Kiselev, W. Yi, M. Sokolich, M. J. Manfra, E. M. Spanton, and K. A. Moler, New J. Phys. 18, 083005 (2016).
[29] J. Maciejko, X. L. Qi, and S. C. Zhang, Phys. Rev. B 82, 155310 (2010).
[30] G. Tkachov and E. M. Hankiewicz, Phys. Rev. Lett. 104, 166803 (2010).
[31] K. Bendias, S. Shamim, O. Herrmann, A. Budewitz, P. Shekhar, P. Leubner, J. Kleinlein, E. Bocquillon, H. Buhmann, and L. W. Molenkamp, Nano Lett. 18, 4831 (2018).
[32] T. Morimoto, A. Furusaki, and N. Nagaosa, Phys. Rev. Lett. 114, 146803 (2015).
[33] R. Yoshimi, A. Tsukazaki, Y. Kozuka, J. Falson, K. S. Takahashi, J. G.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Checkelsky, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Nagaosa, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Kawasaki, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Tokura, Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 6, 6627 (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 17 [34] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Xu, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Miotkowski, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Chen, Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 7, 11434 (2016).' 
[35] S. K. Chong, K. B. Han, T. D. Sparks, and V. V. Deshpande, Phys. Rev. Lett. 123, 036804 (2019).
[36] J. Ziegler, D. A. Kozlov, N. N. Mikhailov, S. Dvoretsky, and D. Weiss, Phys. Rev. Res. 2, 033003 (2020).
[37] S. Shamim, P. Shekhar, W. Beugeling, J. Böttcher, A. Budewitz, J.-B. Mayer, L. Lunczer, E. M. Hankiewicz, H. Buhmann, and L. W. Molenkamp, Nat. Commun. 13, 2682 (2022).
[38] W. Beugeling, Phys. Rev. B 104, 115428 (2021).
[39] D. A. Kealhofer, M. Goyal, T. N. Pardue, and S. Stemmer, Phys. Rev. B 104, 035435 (2021).
[40] A. M. Kadykov, S. S. Krishtopenko, B. Jouault, W. Desrat, W. Knap, S. Ruffenach, C. Consejo, J. Torres, S. V. Morozov, N. N. Mikhailov, S. A. Dvoretskii, and F. Teppe, Phys. Rev. Lett. 120, 086401 (2018).
[41] Y. Zeng, F. Xue, and A. H. MacDonald, Phys. Rev. B 105, 125102 (2022).
[42] S. S. Murzin and A. G. M. Jansen, Physica E 43, 1576 (2011).
[43] L. Lunczer, P. Leubner, M. Endres, V. L. Müller, C. Brüne, H. Buhmann, and L. W. Molenkamp, Phys. Rev. Lett. 123, 047701 (2019).
[44] G. M. Gusev, Z. D. Kvon, E. B. Olshanetsky, and N. N. Mikhailov, Solid State Commun. 302, 113701 (2019).
[45] P. Delplace, J. Li, and M. Büttiker, Phys. Rev. Lett. 109, 246803 (2012).

Figure Captions

Figure 1: Characteristic Landau level spectra of different topological phases in thin films. (a) 2D TI, (b) 2D trivial insulator, (c) hybridized 3D TI with SIA, and (d) 3D TI with SIA. The labels denote quantum Hall filling factors. In (a), (b), and (c), electron-like (hole-like) levels are shown in orange (blue); in (d), Landau levels from the higher energy surface are shown in purple and those from the other surface in green. In all panels, the n = 0 Landau levels are drawn with heavier line weight. All four phases can produce a ν = 0 quantum Hall state (a 3D TI without SIA has degenerate n = 0 Landau levels and no ν = 0 quantum Hall state). See Supplementary Materials for details of the calculations.

Figure 2: Landau levels and quantum Hall effect of sample 1 (20 nm film). Magnetic field (B) and gate voltage (Vg) dependence of the (a) longitudinal (σxx) and (b) Hall (σxy) conductivities of sample 1.
The labels in (a) denote the corresponding quantum Hall filling factors in (b). The black arrows mark additional Landau levels from a higher energy subband.

Figure 3: Magnetic field (B) and gate voltage (Vg) dependence of the longitudinal (σxx) conductivity of (a) 18 nm, (b) 19 nm, and (c) 22 nm films.

Figure 4: Landau levels and quantum Hall effect of a 14 nm film. Shown are the B and Vg dependence of σxx (a) and σxy (b) of the sample. Labels in (a) denote quantum Hall filling factors obtained from (b).

Supplementary Information

Two-dimensional topological insulator state in cadmium arsenide thin films

Alexander C. Lygo, Binghao Guo, Arman Rashidi, Victor Huang, Pablo Cuadros-Romero, and Susanne Stemmer
Materials Department, University of California, Santa Barbara, California 93106-5050, USA

Contents

Model of a hybridized 3D TI
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='.2 Sample growth .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='3 Device fabrication .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='3 Transport measurements .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'}
[Extraction residue: repeated dot-leader chunks from the table of contents of 2301.02759v1.pdf. Recoverable entries:]
page_content='X-ray characterization'
page_content='Resistance data'
page_content='Gate voltage dependence of sheet carrier density and Hall mobility'
page_content='Field dependence of 𝝂 = 𝟎'
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='.9 𝝈𝒙𝒙 of the 20 nm sample and the 14 nm sample with guides to the eye .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='.9 Temperature dependence of G at ν = 0 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='10 Magnetic field dependence of σxx between the n = 0 Landau levels .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='.11 References .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='..' 
Model of a hybridized 3D TI

A two-dimensional topological insulator state can be realized in thin films of 3D topological insulators when the film thickness, L, is reduced such that there is a spatial overlap of the surface state wavefunctions. Starting from the bulk Hamiltonian of a 3D TI, the effective Hamiltonian in the thin film limit was derived in Ref. [1], the result being:

$$H_{\mathrm{eff}} = \begin{pmatrix} h_+(k) & V_0 I_{2\times2} \\ V_0 I_{2\times2} & h_-(k) \end{pmatrix}. \tag{S1}$$

Here, $h_\pm(k) = E_0 - Dk^2 - \hbar v_F\left(k_x\sigma_y + k_y\sigma_x\right) \pm \left(\tfrac{\Delta}{2} - Mk^2\right)\sigma_z$, where the $\sigma_i$ are Pauli matrices in the basis of spin-up and spin-down, $k^2 = k_x^2 + k_y^2$, $v_F$ is the Fermi velocity, and $E_0$, $D$, $\Delta$, and $M$ are material-specific band parameters. In the presence of structural inversion asymmetry (SIA), $2V_0$ is the potential difference between the top and bottom surfaces of the thin film. If $V_0 = 0$ (no SIA), $H_{\mathrm{eff}}$ is simply the Bernevig-Hughes-Zhang Hamiltonian [2]. The condition $\Delta M > 0$ describes a 2D TI. The term $\left(\tfrac{\Delta}{2} - Mk^2\right)$ is a mass term due to hybridization; the parameters $\Delta$ and $M$ depend on L and approach zero in the large-L limit, returning the system to a 3D TI. Landau level energies of a hybridized 3D TI were derived in Ref. [3]. Here we briefly summarize those results.
With the magnetic field applied normal to the plane of the thin film, the Landau level energies are obtained from $H_{\mathrm{eff}}$ by first making the substitution $\vec{k} \to -i\nabla + e\vec{A}/\hbar$, where e is the elementary charge and $\hbar$ the reduced Planck constant. We work in the Landau gauge $\vec{A} = By\hat{x}$. Using the ladder operators $a = \tfrac{l_B}{\sqrt{2}}\left(k_x - \partial_y - y/l_B^2\right)$ and $a^\dagger = \tfrac{l_B}{\sqrt{2}}\left(k_x + \partial_y - y/l_B^2\right)$, where $l_B = \sqrt{\hbar/eB}$ is the magnetic length, and defining $\omega = 2M/l_B^2$, the effective Hamiltonian in a magnetic field reads:

$$H_{\mathrm{eff}}(B) = \begin{pmatrix} \frac{\Delta}{2} - \omega\left(a^\dagger a + \frac{1}{2}\right) & \frac{i\sqrt{2}\hbar v_F}{l_B}\,a & V_0 & 0 \\ -\frac{i\sqrt{2}\hbar v_F}{l_B}\,a^\dagger & -\frac{\Delta}{2} + \omega\left(a^\dagger a + \frac{1}{2}\right) & 0 & V_0 \\ V_0 & 0 & -\frac{\Delta}{2} + \omega\left(a^\dagger a + \frac{1}{2}\right) & \frac{i\sqrt{2}\hbar v_F}{l_B}\,a \\ 0 & V_0 & -\frac{i\sqrt{2}\hbar v_F}{l_B}\,a^\dagger & \frac{\Delta}{2} - \omega\left(a^\dagger a + \frac{1}{2}\right) \end{pmatrix}. \tag{S2}$$

The term $E_0 - Dk^2$ is dropped from $h_\pm(k)$ as it does not significantly influence the Landau level energies. Taking the eigenstates $|n\rangle$ of the quantum harmonic oscillator as a basis, the trial solutions for n = 0 are:

$$\psi_0 = \begin{pmatrix} 0 \\ \phi_{2,0}\langle y|0\rangle \\ 0 \\ \phi_{4,0}\langle y|0\rangle \end{pmatrix}, \tag{S3}$$

and for n > 0 they are:

$$\psi_n = \begin{pmatrix} \phi_{1,n}\langle y|n-1\rangle \\ \phi_{2,n}\langle y|n\rangle \\ \phi_{3,n}\langle y|n-1\rangle \\ \phi_{4,n}\langle y|n\rangle \end{pmatrix}, \tag{S4}$$

where $\phi_{j,n} = C_{j,n}\,e^{ik_x x}/\sqrt{L_x}$ with the $C_{j,n}$ being constants and

$$\langle y|n\rangle = \frac{1}{\sqrt{2^n n!\,\sqrt{\pi}\,l_B}} \exp\!\left(-\frac{(y-y_0)^2}{2l_B^2}\right) \mathcal{H}_n\!\left(\frac{y-y_0}{l_B}\right), \tag{S5}$$

where the $\mathcal{H}_n$ are the Hermite polynomials and $y_0$ is the guiding center of the wave packet. Using Eqs. S2, S3, and S4, the energies of the Landau levels for n = 0 were determined to be:

$$E_0^\pm = \pm\sqrt{\left(-\frac{\Delta}{2} + \frac{\omega}{2}\right)^2 + V_0^2}, \tag{S6}$$

and for n > 0 the energies are:

$$E_{n,s}^\pm = \pm\sqrt{\left[\varepsilon_n + s\sqrt{\left(\frac{\omega}{2}\right)^2 + V_0^2}\,\right]^2 + \frac{\left(\sqrt{2n}\,\hbar v_F/l_B\right)^2 V_0^2}{\left(\frac{\omega}{2}\right)^2 + V_0^2}}, \tag{S7}$$

where $s = \pm 1$ and $\varepsilon_n = \sqrt{n\left(\frac{\sqrt{2}\,\hbar v_F}{l_B}\right)^2 + \left(\frac{\Delta}{2} - \omega\right)^2}$.

For the Landau level spectra shown in Fig. 1 of the main text we have used $v_F = 5 \times 10^5$ m/s (a realistic value for Cd3As2 thin films [4]) throughout. For the spectrum of a 2D TI (Fig. 1a), $\Delta = -20$ meV, $M = -800$ meV nm$^2$, and $V_0 = 0$. For the spectrum of a 2D trivial insulator without SIA (Fig. 1b), $\Delta = -20$ meV, $M = 800$ meV nm$^2$, and $V_0 = 0$. For the spectrum of a hybridized 3D TI with SIA (Fig. 1c) we used $\Delta = -20$ meV, $M = -900$ meV nm$^2$, and $V_0 = 10$ meV.
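As a numerical illustration of Eq. (S6), the sketch below evaluates the n = 0 Landau level energies $E_0^\pm = \pm\sqrt{(-\Delta/2 + \omega/2)^2 + V_0^2}$ with $\omega = 2M/l_B^2$ and $l_B = \sqrt{\hbar/eB}$, using 2D TI parameters of the kind quoted above. This is an orientation aid, not the plotting code used for Fig. 1.

```python
import math

# Evaluate the n = 0 Landau level energies of Eq. (S6) at a few fields.
# Parameter values follow the 2D TI case (Fig. 1a): Delta = -20 meV,
# M = -800 meV nm^2, V0 = 0. All quantities are converted to SI units.
hbar = 1.054571817e-34   # reduced Planck constant (J s)
e = 1.602176634e-19      # elementary charge (C)
meV = 1e-3 * e           # 1 meV in joules

Delta = -20 * meV              # hybridization gap parameter
M = -800 * meV * (1e-9) ** 2   # -800 meV nm^2 converted to J m^2
V0 = 0.0                       # no structural inversion asymmetry

for B in (2.0, 6.0, 10.0):     # perpendicular magnetic field (T)
    lB2 = hbar / (e * B)       # magnetic length squared (m^2)
    omega = 2 * M / lB2        # hybridization-mass energy scale (J)
    E0 = math.sqrt((-Delta / 2 + omega / 2) ** 2 + V0 ** 2)
    print(f"B = {B:4.1f} T -> E_0 = +/-{E0 / meV:.2f} meV")
```

With these parameters ($\Delta M > 0$ and $V_0 = 0$) the n = 0 levels cross zero energy where $\omega = \Delta$, i.e. near $B \approx 8$ T.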
For the spectrum of a 3D TI with SIA (Fig. 1d) we used $\Delta = 0$, $M = 0$, and $V_0 = 10$ meV. These values are chosen for illustrative purposes only, as realistic model parameters for Cd3As2 thin films on a substrate are not known.

Sample growth

The Cd3As2 thin films discussed in the main text were grown by molecular beam epitaxy (MBE). Samples were grown on epi-ready undoped (001) GaSb substrates offcut 1.5° or 3° towards (111)B. The substrates were annealed for 12 hours at 150 °C in a high-vacuum load lock chamber. The substrate's native oxide was removed either in an ultra-high vacuum (UHV) preparation chamber using atomic hydrogen etching for 1 hour at a substrate temperature of 500 °C, as measured by the heater thermocouple, or by thermal desorption under Sb2 flux in the UHV growth chamber. In the following, all temperatures were measured using an optical pyrometer, unless otherwise stated. The buffer layer growth consisted of 100 nm of GaSb grown at approximately 490 °C to smooth the surface, followed by 500 nm or 1 μm of Al0.45In0.55Sb grown between 390 °C and 410 °C. Afterwards, the substrate was cooled to a substrate heater thermocouple temperature between 120 °C and 170 °C to grow Cd3As2. To protect the Cd3As2 film surface, 3 nm of GaSb was grown as a capping layer at the Cd3As2 growth temperature.

Device fabrication

Eight-arm Hall bars were patterned using standard photolithographic techniques. Mesas were isolated by argon ion beam dry etching.
Sample 1 Hall bars were contacted by sputtered Au/Ti (200/50 nm) leads or electron-beam evaporated Au/Pt/Ti (200/20/20 nm) leads. For top gates, ~25 nm of AlOx, grown by atomic layer deposition (ALD) using a 120 °C process, served as the dielectric. The dielectric thickness was determined using ellipsometry performed on a Si spectator chip from the same processing run. Immediately prior to AlOx deposition, the entire chip was exposed to a brief low-energy oxygen plasma. The gate electrodes were deposited by thermal evaporation of Au/Ni (200/50 nm). The gate electrode covered the top and sides of the channel. Fig. S1 shows an optical micrograph of a Hall bar fabricated from the 20 nm sample. The labels denote the contact configuration used for 4-probe measurements.

Fig. S1. Optical micrograph of a Hall bar fabricated from sample 1. The Cd3As2 appears reddish-pink. The labels denote the contact configuration used for 4-probe measurements.

Transport measurements

All transport measurements were performed in a Quantum Design PPMS with a base temperature T = 1.8 K, in magnetic fields B up to 14 T. Temperature dependent measurements of the 20 nm sample were performed from 1.8 K to 40 K. Standard lock-in detection techniques were used for all measurements.
Except for the 14 nm sample, 4-probe measurements were performed by applying a 10 mV AC (17.77 Hz) voltage to a ~10 MΩ resistor in series with the devices. The resulting current (0.15-1 nA) was measured using a Stanford Research Systems (SRS) SR830 lock-in amplifier. Differential voltages (Vxx and Vxy) were measured using SRS SR560 voltage preamplifiers with a gain of either 100 or 500; the signal was band-pass filtered with cutoff frequencies at 10 Hz and 30 Hz and a 6 dB/oct. rolloff. The outputs of the preamplifiers were recorded using SRS SR830 lock-in amplifiers. 2-probe conductance measurements were performed by applying an AC (17.77 Hz) voltage (≤ 200 mV) across adjacent contacts along the same edge.
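The series-resistor bias scheme above amounts to simple Ohm's-law arithmetic: a 10 mV excitation across a ~10 MΩ series resistor approximates a current source, and the quoted 0.15-1 nA range corresponds to device resistances from well below 10 MΩ up to several tens of MΩ. A minimal sketch (the device resistances are hypothetical):

```python
# Current delivered by a voltage source through a large series resistor.
# V_exc and R_series follow the text above; the device resistances are
# made-up examples spanning the regimes encountered in gated devices.
V_exc = 10e-3        # excitation voltage (V)
R_series = 10e6      # series resistor (ohm)

for R_device in (0.1e6, 10e6, 50e6):      # hypothetical device resistances (ohm)
    I = V_exc / (R_series + R_device)     # bias current through the device
    print(f"R_device = {R_device / 1e6:5.1f} Mohm -> I = {I * 1e9:.2f} nA")
```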
The bias voltage was measured using a SR560 with a gain of 100 and the same filter settings as described above. The current through the device was measured using an Ithaco model 1211 transimpedance amplifier with a gain of $10^{-7}$ A/V. Four-probe measurements of the 14 nm sample were performed similarly, except that a 100 kΩ resistor was placed in parallel with the device to minimize Joule heating while measuring the highly resistive state around charge neutrality. Due to the highly insulating state around charge neutrality, measurements of sample 2 were limited to magnetic fields up to 9 T. For all samples, the gate voltage, Vg, was applied relative to circuit ground and was stepped in 2 mV or 5 mV increments using a Keithley 2400 source meter. The magnetic field was stepped in 200 mT (150 mT for the 14 nm sample) increments. For all data sets, readings taken at the same Vg value were averaged before plotting.
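The measured resistivities are converted to conductivities by the standard 2×2 tensor inversion (Eqs. S8 and S9). A minimal sketch, with placeholder resistivity values:

```python
import numpy as np

# Invert the 2x2 resistivity tensor into conductivities:
# sigma_xx = rho_xx / (rho_xx^2 + rho_xy^2)
# sigma_xy = -rho_xy / (rho_xx^2 + rho_xy^2)
def rho_to_sigma(rho_xx, rho_xy):
    det = rho_xx ** 2 + rho_xy ** 2
    return rho_xx / det, -rho_xy / det

# Placeholder resistivity values (ohms per square); real data would come
# from the Vxx and Vxy measurements described above.
rho_xx = np.array([500.0, 800.0])
rho_xy = np.array([100.0, 2000.0])
sigma_xx, sigma_xy = rho_to_sigma(rho_xx, rho_xy)
print(sigma_xx, sigma_xy)
```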
Longitudinal ($\sigma_{xx}$) and transverse ($\sigma_{xy}$) conductivities were obtained by tensor inversion as:

$$\sigma_{xx} = \frac{\rho_{xx}}{\rho_{xx}^2 + \rho_{xy}^2} \tag{S8}$$

$$\sigma_{xy} = \frac{-\rho_{xy}}{\rho_{xx}^2 + \rho_{xy}^2} \tag{S9}$$

X-ray characterization

X-ray diffraction (XRD) and reflectivity (XRR) measurements were performed in triple-axis geometry on a Rigaku SmartLab diffractometer equipped with a HyPix-3000 detector. The incident optics included a 1 mm physical slit and a Ge (220) 2-bounce monochromator to select Cu Kα. Figure S2(a) shows x-ray reflectivity measurements for all samples. The sample thicknesses were determined by fitting the reflectivity data with a model that included all layers of the heterostructures. Figure S2(b) shows 2θ/ω scans performed after alignment to the GaSb substrate 004 reflection. At room temperature, because of slight differences in the growth conditions of the Al0.45In0.55Sb buffer layer, the measured Cd3As2 0016 reflection for the 14 nm and 22 nm samples is shifted to a lower angle (concurrently, the Al0.45In0.55Sb reflection is shifted to a higher angle) than that of the 18 nm, 19 nm, and 20 nm samples. Figures S3 and S4 show reciprocal space maps (RSMs) taken around the GaSb 224 reflection for the 20 nm and 14 nm samples, respectively. Tables SI and SII report the lattice constants, obtained from the RSMs, of the Cd3As2 film and buffer layers for the 20 nm sample and the 14 nm sample, respectively. Peak positions were determined by finding the maximum intensity of each reflection after Gaussian filtering with a standard deviation parameter of 1.5. Uncertainties on the lattice constants were estimated by propagating the uncertainties of the peak position in the standard way.
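The peak-position procedure just described can be sketched in a few lines: smooth the map with a Gaussian of standard deviation 1.5 pixels and take the coordinates of the maximum intensity. The synthetic map below stands in for the real RSM data, which is not reproduced here.

```python
import numpy as np

# Separable Gaussian smoothing followed by an argmax, mimicking the
# peak-finding step described in the text (sigma = 1.5 pixels).
def gaussian_smooth(img, sigma=1.5, radius=5):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

# Synthetic "reflection" centered at (row 40, column 60) plus noise.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:101, 0:101]
rsm = np.exp(-((x - 60) ** 2 + (y - 40) ** 2) / 50.0)
rsm = rsm + 0.2 * rng.standard_normal(rsm.shape)

iy, ix = np.unravel_index(np.argmax(gaussian_smooth(rsm)), rsm.shape)
print(f"peak position: row {iy}, column {ix}")
```

Smoothing before the argmax suppresses counting noise so that a single noisy pixel cannot displace the fitted peak, which is why the filtered maximum is a reasonable estimator of the reflection center.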
Uncertainties of the peak positions were taken to equal the step size of the measurement. At room temperature, the 20 nm sample and the 14 nm sample are under small compressive strain (−0.54% and −0.75%, respectively) relative to bulk crystals [5].

Fig. S2. X-ray reflectivity data (a) and out-of-plane 2θ/ω x-ray diffraction patterns (b) of the samples discussed in the manuscript. [Panels plot log(relative intensity) versus 2θ for the 14, 18, 19, 20, and 22 nm samples; the 2θ/ω scans cover the Cd3As2 (0016), GaSb (004), and Al0.45In0.55Sb (004) reflections.]

Fig. S3. Reciprocal space map taken around the GaSb (224) reflection of the 20 nm sample. The dashed white line shows the cubic condition (a = c, or q[110] = √2 q[100]) and the solid white line marks the q[110] position of the Al0.45In0.55Sb buffer layer.

Fig. S4. Reciprocal space map taken around the GaSb (224) reflection for the 14 nm sample. The dashed white line shows the cubic condition (a = c, or q[110] = √2 q[100]) and the solid white line marks the q[110] position of the Al0.45In0.55Sb buffer layer. [Both RSMs resolve the Al0.45In0.55Sb (224), GaSb (224), and Cd3As2 (44 16) reflections.]

Table SI. Lattice constants of Cd3As2 and Al0.45In0.55Sb obtained from the RSM in Fig. S3.

Layer            a (Å)          c (Å)
Cd3As2           12.565±0.001   25.575±0.007
Al0.45In0.55Sb   6.280±0.001    6.319±0.002

Table SII. Lattice constants of Cd3As2 and Al0.45In0.55Sb obtained from the RSM in Fig. S4.

Layer            a (Å)          c (Å)
Cd3As2           12.538±0.001   25.62±0.01
Al0.45In0.55Sb   6.265±0.001    6.309±0.003

Resistance data

Figures S5, S6, S7, S8, and S9 show the longitudinal resistance (Rxx) and Hall resistance (Rxy) used to compute the longitudinal (σxx) and Hall (σxy) conductivities of the samples.

Fig. S5. Longitudinal resistance, Rxx, (a,b) and Hall resistance, Rxy (c) of the 20 nm sample. Traces in (b) are offset by 1 kΩ.
Fig. S6. Longitudinal resistance, Rxx, (a,b) and Hall resistance, Rxy (c) of the 18 nm sample. Traces in (b) are offset by 1 kΩ.

Fig. S7. Longitudinal resistance, Rxx, (a,b) and Hall resistance, Rxy (c) of the 19 nm sample. Traces in (b) are offset by 1 kΩ.

Fig. S8. Longitudinal resistance, Rxx, (a,b) and Hall resistance, Rxy (c) of the 22 nm sample. Traces in (b) are offset by 1 kΩ.

Fig. S9. Longitudinal resistance, Rxx, (a,b) and Hall resistance, Rxy (c) of the 14 nm sample.

Gate voltage dependence of sheet carrier density and Hall mobility

Figure S10 shows the gate voltage (Vg) dependence of the sheet carrier density, n2D, and the Hall mobility of the 20 nm sample.
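The capacitance analysis used in this section (capacitance density from the slope of n2D versus Vg, with the measured value interpreted as the geometric and quantum capacitances in series) can be sketched as follows. The data below are illustrative placeholders constructed to mimic the quoted values for this sample, not the measured points:

```python
import numpy as np

E = 1.602176634e-19  # elementary charge (C)

# Hypothetical n2D(Vg) data with a kink at Vg = 1 V (units: V, cm^-2).
vg_lo = np.array([-1.0, 0.0, 0.5, 1.0])
n_lo = 1.0e12 + (62e-9 / E) * (vg_lo + 1.0)      # slope -> 62 nF/cm^2
vg_hi = np.array([1.0, 2.0, 3.0, 4.0])
n_hi = n_lo[-1] + (131e-9 / E) * (vg_hi - 1.0)   # slope -> 131 nF/cm^2

# Capacitance density from the slope of n2D versus Vg: C = e * dn2D/dVg.
c_lo = E * np.polyfit(vg_lo, n_lo, 1)[0]   # F/cm^2, below the kink
c_hi = E * np.polyfit(vg_hi, n_hi, 1)[0]   # F/cm^2, above the kink

# Geometric capacitance of a 25 nm Al2O3 dielectric with epsilon_r = 9,
# and the quantum capacitance implied by the series combination
# 1/C_total = 1/C_geom + 1/C_q.
eps0 = 8.8541878128e-14                    # vacuum permittivity, F/cm
c_geom = 9 * eps0 / 25e-7                  # ~320 nF/cm^2
c_q = 1.0 / (1.0 / c_lo - 1.0 / c_geom)

print(f"C(below 1 V) = {c_lo * 1e9:.0f} nF/cm^2, C(above 1 V) = {c_hi * 1e9:.0f} nF/cm^2")
print(f"C_geom = {c_geom * 1e9:.0f} nF/cm^2, implied C_q = {c_q * 1e9:.0f} nF/cm^2")
```

The kink in the slope then translates directly into a step in the inferred quantum capacitance, i.e. in the density of states.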
The capacitance density of the device was determined from the slope of n2D versus Vg. There is a distinct change in slope at Vg = 1 V, the approximate voltage at which an additional subband appears [see the main text as well as Fig. S12(a)], reflecting the influence of the quantum capacitance as the density of states changes. Below Vg = 1 V the capacitance density is 62 nF/cm2, and above Vg = 1 V it is 131 nF/cm2. Assuming a dielectric constant of 9 for the 25 nm thick Al2O3 gate dielectric, the geometric capacitance is 320 nF/cm2. The discrepancy is due to the influence of the quantum capacitance in series with the gate dielectric.

Fig. S10. Sheet carrier density, n2D, (a) and Hall mobility (b) of the 20 nm sample. The red and blue lines in (a) are linear fits used to determine the capacitance density.

Field dependence of ν = 0

Figure S11 shows σxy of sample 1 at 4 T, 9.6 T, and 14 T around ν = 0. For 4 T and 14 T there is a clear plateau at σxy = 0, whereas at 9.6 T σxy changes sign smoothly between the ν = 1 and ν = −1 plateaus.

Fig. S11. σxy of sample 1 at 4 T, 9.6 T, and 14 T around ν = 0.

σxx of the 20 nm sample and the 14 nm sample with guides to the eye
Fig. S12 shows σxx for sample 1 (a) and sample 2 (b) with lines overlaid to highlight the Landau levels. The lines serve only as guides to the positions of the Landau levels (they are not fits), as they neglect the √B dispersion of the n > 0 Landau levels. The data shown in Fig. S12 are the same as those in Fig. 2(a) and Fig. 4(a) of the main text.

Fig. S12. σxx of sample 1 (a) and sample 2 (b). The pink dashed lines are guides to the eye for the Landau levels originating from the states nearest to charge neutrality, and the green dashed lines in (a) are guides to the eye for the two Landau levels originating from a higher-energy subband.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Temperature dependence of G at ν = 0 Figure S13 shows temperature dependent two-point conductance of the 20 nm sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Fitting of the temperature dependence of G, shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' 3 in the main text, was done using the non-linear least squares method as implemented in the SciPy Python package [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Uncertainties on the fitted parameters are taken to be equal to the square root of the diagonal elements of the covariance matrix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Tables SII and SIII show the results for the fit parameters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' S13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' Temperature dependent conductance (G) of the 20 sampleat different magnetic fields.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' The dashed lines in (a) and (c) are fits to the 2D Mott variable range hopping equation and Arrhenius equation, respectively.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' The dashed lines in (b) are guides to the eye highlighting the range of T-linear conductance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=' −4 −2 0 2 4 9g (9) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='5 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='0 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='5 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='0 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='5 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='0 B (7) 0 8 σxx (e2/h) ν = -1 2 4 0 1 (b) −4 −2 0 2 4 Vg (V) 0 2 4 6 8 10 12 14 B (T) 0 5 σxx (e2/h) ν = -1 4 0 3 2 5 8 6 0 1 (a) 0 10 20 30 40 7emperture (.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=') 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='10 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='25 G (e2/h) 0 7 1 7 2 7 3 7 4 7 5 7 6 7 7 7 (a) 0 10 20 30 40 Temperature (.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=') 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='20 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='25 G (e2/h) 8 T 9 T 10 T 11 T 12 T (b) 0 10 20 30 40 Temperture (.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content=') 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='12 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/f9E0T4oBgHgl3EQf6AK-/content/2301.02759v1.pdf'} +page_content='16 G (e2/h) 13 T 14 T (c) Table SIII.' 
Table SIII. Parameters obtained from fitting the minimum 2-point conductance (G) at n = 0 versus temperature to the 2D Mott variable range hopping model, G(T) = G0 exp[−(T0/T)^(1/3)]. The experimental data and fits are shown in Fig. S13(a).

Magnetic Field (T) | G0 (e²/h)   | T0 (K)
0                  | 0.70±0.04   | 58±7
1                  | 0.75±0.03   | 82±6
2                  | 0.81±0.03   | 125±8
3                  | 0.74±0.03   | 112±7
4                  | 0.64±0.02   | 78±5
5                  | 0.50±0.02   | 40±3
6                  | 0.74±0.01   | 11±1
7                  | 0.264±0.003 | 1.2±0.1

Table SIV. Parameters obtained from fitting the minimum 2-point conductance at n = 0 versus temperature to the Arrhenius equation, G(T) = G0 exp[−Ea/T]. The experimental data and fits are shown in Fig. S13(c).

Magnetic Field (T) | G0 (e²/h)   | Ea (K)
13                 | 0.158±0.002 | 4.8±0.2
14                 | 0.143±0.005 | 10.3±0.8

Magnetic field dependence of σxx between the n = 0 Landau levels

Figure S14 shows the conductivity between the n = 0 Landau levels versus magnetic field. For B > 11 T and B < 7.8 T, the n = 0 Landau levels are well separated and there is a clear minimum.
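Fits of the kind reported in Tables SIII and SIV can be reproduced with a standard nonlinear least-squares routine. The sketch below is illustrative, not the authors' analysis code: it generates synthetic conductance data from the 2D Mott variable-range-hopping form and recovers G0 and T0 with `scipy.optimize.curve_fit`; the sample values and the 1 % noise level are assumptions chosen near the B = 0 row of Table SIII.

```python
import numpy as np
from scipy.optimize import curve_fit

def mott_vrh(T, G0, T0):
    """2D Mott variable-range hopping: G(T) = G0 * exp(-(T0/T)^(1/3))."""
    return G0 * np.exp(-(T0 / T) ** (1.0 / 3.0))

# Synthetic "measurements" near the B = 0 T row of Table SIII (assumed values).
T = np.linspace(5.0, 40.0, 20)  # temperature (K)
rng = np.random.default_rng(0)
G = mott_vrh(T, 0.70, 58.0) * (1.0 + 0.01 * rng.standard_normal(T.size))

# Fit, starting from a rough initial guess for (G0, T0).
(G0_fit, T0_fit), cov = curve_fit(mott_vrh, T, G, p0=(1.0, 50.0))
print(G0_fit, T0_fit)  # close to the input values 0.70 e²/h and 58 K
```

The Arrhenius fits of Table SIV follow the same pattern, with the model swapped for `G0 * exp(-Ea / T)`.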
Around the crossing of the n = 0 Landau levels (7.8 T ≤ B ≤ 11 T), there is no gap between the n = 0 Landau levels and the maximum conductivity is plotted instead.

Fig. S14. Conductivity of sample 1 between the n = 0 Landau levels versus magnetic field (σxx minima where the levels are well separated, σxx maxima around the crossing).
Under consideration for publication in J. Fluid Mech.

Slumping regime in lock-release turbidity currents

Cyril Gadal1†, M. Mercier1 and L. Lacaze1
1Institut de Mécanique des Fluides de Toulouse (IMFT), Université de Toulouse, CNRS, Toulouse, France

(Received xx; revised xx; accepted xx)

Most gravitational currents occur on sloping topographies, often in the presence of particles that can settle during the current propagation. Yet, an exhaustive exploration of the associated parameters in experimental devices is still lacking. Here, we present an extensive experimental investigation of the slumping regime of turbidity (particle-laden) currents in two lock-release (dam-break) systems with inclined bottoms. We identify 3 regimes controlled by the ratio between settling and current inertia. (i) For negligible settling, the turbidity current morphodynamics correspond to those of saline homogeneous gravity currents, in terms of velocity, slumping (constant-velocity) regime duration and current morphology. (ii) For intermediate settling, the slumping regime duration decreases and becomes fully controlled by a particle settling characteristic time.
(iii) When settling overcomes the current initial inertia, the slumping (constant-velocity) regime is not detected anymore. In the first two regimes, the current velocity increases with the bottom slope, by about 35 % between 0◦ and 15◦. Finally, our experiments show that the current propagates during the slumping regime with the same shape in the frame of the moving front. Strikingly, the current head (first 10 centimeters behind the nose) is found to be independent of all experimental parameters covered in the present study. We also quantify water entrainment coefficients 𝐸, and compare them with previous literature, finding 𝐸 ∝ 𝑅𝑒.

Key words: gravity currents, particle/fluid flow
† Email address for correspondence: cyril.gadal@imft.fr
arXiv:2301.00192v1 [physics.flu-dyn] 31 Dec 2022

1. Introduction
Turbidity currents are gravity-driven flows induced by the presence of suspended particles, in addition to other processes that may affect the density, such as temperature, salinity or humidity. They occur ubiquitously in nature, from submarine turbidites to powder snow avalanches and volcanic pyroclastic flows, and are almost always sources of potential natural hazards (e.g. Dobran et al. 1994; Stethem et al. 2003; Carter et al. 2014; Clare et al. 2020). These currents have been studied extensively, along with purely density-driven gravity currents, for almost a century, by means of experiments (e.g. Simpson & Britter 1980; Rastello et al. 2002; Dai 2014; Lippert & Woods 2020), theoretical analyses (e.g. Benjamin 1968; Huppert 1998; Hogg & Woods 2001; Ungarish 2009) and numerical simulations (e.g. Meiburg et al. 2005; Cantero et al. 2007, 2012; Ottolenghi et al. 2016). Among these studies, a major source of interest has been to predict the front velocity 𝑢𝑐 of the current.
Dimensionally, a current of height ℎ and density difference Δ𝜌 with respect to the ambient density 𝜌 would have a front velocity scaling as

𝑢𝑐 ∝ √(𝑔′ℎ),    (1.1)

where 𝑔′ = 𝑔(Δ𝜌/𝜌). Many works have been devoted to capturing the exact value of the proportionality factor. The pioneering work of Von Kármán (1940) leads to √2 in the case of steady unbounded flows, later extended to account for finite flow depth (Benjamin 1968; Rottman & Simpson 1983; Ungarish & Zemach 2005), energy conservation/dissipation (Shin et al. 2004; Borden & Meiburg 2013) or non-Boussinesq density differences (Ungarish 2007, 2011; Konopliv et al. 2016).
Gravity currents can be generated by a constant source of buoyancy (e.g. Britter & Linden 1980; Baines 2001; Cenedese & Adduce 2008; Lippert & Woods 2020), or can result from the instantaneous release of a limited volume of buoyant fluid. In the latter case, dam-break (or lock-exchange) systems are a common set-up to study the features of the resulting currents (e.g. Simpson 1972; Huppert & Simpson 1980; Rottman & Simpson 1983; Bonnecaze et al. 1993; Ungarish & Zemach 2005; Ungarish 2007, 2011; Chowdhury & Testik 2011; Khodkar et al. 2017; Balasubramanian & Zhong 2018; Maggi et al. 2022). The heavier (or lighter) fluid is kept separated from the ambient by a locked gate, which is suddenly opened to generate the current. For high Reynolds number flows, the front velocity of the resulting currents can evolve through different stages (Huppert & Simpson 1980). First is a regime of constant velocity, called the slumping regime, as the current gains inertia thanks to the collapse of the suspended column, and which lasts about 5 – 15 lock lengths depending on the geometry (Rottman & Simpson 1983; Ungarish & Zemach 2005; Ungarish 2009).
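As an order-of-magnitude check of scaling (1.1), one can evaluate √(𝑔′ℎ) for laboratory-like values. The numbers below are illustrative assumptions (a 20 cm deep release with a 2 % relative density excess), not data from any specific study; the √2 prefactor is the unbounded-flow value of Von Kármán (1940).

```python
import math

g = 9.81      # gravity (m s^-2)
rho = 1000.0  # ambient density (kg m^-3)
drho = 20.0   # density excess (kg m^-3), i.e. 2 % of rho (assumed value)
h = 0.20      # current/release height (m), assumed

g_prime = g * drho / rho          # reduced gravity g' = g * (Δρ/ρ)
u_scale = math.sqrt(g_prime * h)  # dimensional velocity scale in (1.1)
u_vk = math.sqrt(2.0) * u_scale   # with Von Kármán's unbounded-flow prefactor √2

print(f"g' = {g_prime:.3f} m/s², u ~ {u_scale:.3f} m/s, √2·u = {u_vk:.3f} m/s")
```

With these assumed values the velocity scale is a few tens of cm s⁻¹, comparable to typical dam-break laboratory currents.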
If inertia dominates the flow, the receding rarefaction wave during column slumping hits the backwall and reflects towards the current nose, modifying its velocity into a so-called inertial regime, where the front position evolves as 𝑡^(2/3). The current eventually enters regimes dominated by either viscosity (Huppert & Simpson 1980), friction and entrainment (Bonnecaze & Lister 1999; Hogg & Woods 2001) or particle settling (Bonnecaze et al. 1993, 1995; Hallworth et al. 1998; Huppert 1998; Hogg et al. 2000; Harris et al. 2001).
In lock-release systems, these previous studies have focused on the impact of the settling velocity 𝑢s on the long-term dynamics of the current, typically during the inertial regime and after. This results in low values of the Rouse number, which characterizes the ability of a fluid flow to delay the gravitational settling of the particles, and is defined here as P = 𝑢s/𝑢c; it is typically smaller than 10−2 in the experiments of Bonnecaze et al. (1993). Likewise, theoretical studies have also been restricted to asymptotically small P values, allowing for analytical developments, particularly in the shallow-water limit (Hogg et al. 2000; Harris et al. 2001). Recently, the study of Ikeda & Testik (2021) noted increasing deviations of particle-laden from saline currents in all propagation regimes as the Rouse number increases. Note that the literature on constant-inflow turbidity currents has also quantified significant discrepancies, especially concerning the volume occupied by the current (Bonnecaze & Lister 1999; Lippert & Woods 2020).
Lock-release homogeneous and turbidity gravity currents on an inclined plane, which induces an extra driving force due to the weight of the current, have also been studied in the literature (Beghin et al. 1981; Rastello et al. 2002; Séon et al. 2005; Birman et al. 2007; Maxworthy & Nokes 2007; Dai 2013, 2014; Steenhauer et al. 2017).
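The Rouse number defined above can be estimated from a Stokes settling velocity when the particle Reynolds number is small. The sketch below is a rough estimate under assumed values (silica-like grains in water and a 20 cm s⁻¹ front velocity, none of which are taken from a specific experiment); Stokes drag overestimates 𝑢s for the largest grains, so the output should be read as an order of magnitude only.

```python
# Order-of-magnitude estimate of the Rouse number P = u_s / u_c.
# All numbers are assumed, illustrative values (silica-like grains in water).
g = 9.81        # gravity (m s^-2)
mu = 1.0e-3     # water dynamic viscosity (Pa s)
rho_f = 1000.0  # fluid density (kg m^-3)
rho_p = 2650.0  # particle density (kg m^-3), silica-like
d = 120e-6      # grain diameter (m)
u_c = 0.20      # assumed front velocity (m s^-1)

# Stokes settling velocity: u_s = (rho_p - rho_f) * g * d^2 / (18 * mu)
u_s = (rho_p - rho_f) * g * d**2 / (18.0 * mu)
P = u_s / u_c
print(f"u_s ≈ {u_s * 100:.2f} cm/s, Rouse number P ≈ {P:.3f}")
```

Values of P of a few 10⁻² place such grains well above the ≲ 10⁻² range of the earlier lock-release studies cited above.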
In the latter case, +most studies reported that the current dynamics starts by an acceleration phase, followed +by a deceleration phase corresponding to the inertial regime, in which the current dilution +by water entrainment becomes increasingly important on large slopes. However, at the +author’s knowledge, only the studies of Maxworthy & Nokes (2007) and Birman et al. + +3 +(2007) quantified the impact of the bottom slope on the initial velocity in the slumping +regime, for which the velocity is found to increase with the slope, and rather linearly in the +range [0◦, 20◦]. +In this work, we present lock-release experiments of turbidity currents, where we sys- +tematically vary the initial volume fraction, the bottom slope and the particle diameter (and +thus the settling velocity), thus extending previous works to a larger range of these control +parameters in a unique experimental device. We particularly focus on the slumping regime, +for which we map its existence, and quantify its duration as well as the related current +morphodynamics (velocity, shape) and water entrainment. All along the paper, we also focus +on rationalizing existing results with those obtained in the present study into a relevant +parameter map characterizing the flow regimes. +2. Methods +2.1. Experimental set-ups +In this study, most of the experiments are done using the dam-break experimental set-up +sketched in figure 1(a), later referred to as set-up 1. The tank, 150–cm long (𝐿0 + 𝐿1), and +20–cm wide (𝑊0), is filled with water. The tank is divided in two parts by a sluice gate at +10 cm (𝐿0) from the left side of the tank. It forms a reservoir on the left side of the tank in +which we prepare an initial volume of particle suspension 𝑉0 ≃ 3.9 L by strongly steering +a known mass of particles 𝑚0 within water. Finally, the tank is inclinable at various angles +up to 7◦, and we keep the water height at the gate position constant equal to 20 cm. 
The resulting variation of the initial volume 𝑉0 is accounted for, but remains small compared to the experimental uncertainties. At the beginning of each experiment, the stirring is stopped and the sluice gate is opened almost entirely, up to ≈ 1 cm below the water surface, to limit as much as possible the generation of surface waves. The slumping of the column and the resulting turbidity current are followed by a camera, using a backlight as light source (see figure 1(c–h)).

In order to further explore the influence of the bottom inclination, another experimental set-up is used (set-up 2, see figure 1(b)). Here, the tank can be inclined further thanks to a rigid lid covering the water surface, keeping the water height at 50 cm. In this set-up, 𝐿0 = 10 cm, 𝐿1 = 320 cm and 𝑊0 = 10 cm. Note that the suspension did not fill the entire reservoir height, but around 50 %–75 % of it. Nevertheless, the suspension was checked to be homogeneous up to its maximum height, and the associated initial volume of suspension 𝑉0 was extracted from images prior to opening the gate. Finally, in this set-up, the current is illuminated from the top rather than backlit.

2.2. Parameter space and relevant non-dimensional quantities
Most experiments are performed with silica sand grains of diameter 𝑑 ≃ 120 𝜇m. For these particles, the tank inclination 𝜃 is varied from 0◦ to 7◦ in set-up 1, and from 7◦ to 15◦ in set-up 2. Then, in set-up 1 and for 𝜃 = 7◦, the particle settling velocity 𝑢s is varied by using glass beads (©Silibeads) of mean diameter ranging from 60 𝜇m to 250 𝜇m, corresponding to 𝑢s ∈ [0.3, 3.2] cm s−1. The particle properties are detailed in appendix B. For all bottom slopes and settling velocities, the initial volume fraction is varied in the range 𝜙 ∈ [0.25, 30] %.
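For reference, the initial volume fraction follows from the weighed particle mass through 𝜙 = (𝑚0/𝜌p)/𝑉0. The sketch below illustrates this conversion; the particle density used is an assumed, illustrative value and is not taken from appendix B.

```python
# Sketch: relation between weighed particle mass m0 and volume fraction phi
# in the lock, phi = (m0 / rho_p) / V0. rho_p is an assumed value for
# silica sand, not the measured value of appendix B.

RHO_P = 2650.0   # particle density [kg/m3] (assumed)
V0 = 3.9e-3      # initial suspension volume [m3] (set-up 1)

def particle_mass(phi, rho_p=RHO_P, v0=V0):
    """Mass m0 [kg] giving volume fraction phi in suspension volume v0."""
    return phi * rho_p * v0

def volume_fraction(m0, rho_p=RHO_P, v0=V0):
    """Inverse relation: phi from a weighed mass m0 [kg]."""
    return m0 / (rho_p * v0)

if __name__ == "__main__":
    for phi in (0.0025, 0.03, 0.30):  # span of the explored range
        print(f"phi = {100 * phi:5.2f} %  ->  m0 = {particle_mass(phi):6.3f} kg")
```

The explored range 𝜙 ∈ [0.25, 30] % thus corresponds to masses from a few tens of grams to a few kilograms of particles in the lock, under the assumed density.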
The corresponding excess density of the fluid/particle suspension with +respect to the ambient water, Δ𝜌 = 𝜌0 − 𝜌f (where 𝜌0 = 𝜌f + (𝜌p − 𝜌f)𝜙), therefore varied +between 3 kg m−3 and 600 kg m−3. Finally, we also performed experiments with homogeneous +saline (without particles) gravity current in set-up 1, in order to make direct comparison + +4 +θ +L1 +L0 +(a) +θ +L1 +L0 +(b) +t = 1.0 s +(c) +t = 5.0 s +(d) +t = 10.0 s +(e) +t = 1.0 s +(f) +t = 3.0 s +(g) +t = 7.0 s +(h) +Figure 1: (a) Sketch of the experimental set-up. (b–g) Snapshots of experiments using the +silica sand (𝑑 ∼ 120 𝜇m, 𝑢s = 0.74 cm s−1), 𝜃 = 7.2◦ and for initial volume fractions of +(b–d) 𝜙 = 0.87 % and (e–g) 𝜙 = 6.4 % . The orange lines show the extracted current +contours. +between turbidity current and homogeneous currents. For that purpose the density in the +reservoir was varied from 1002 kg m−3 to 1250 kg m−3 to explore the same range of Δ𝜌- +values. This resulted in a total of 169 experimental runs. +Each experiment is characterized by 3 initial quantities, the bottom slope 𝜃, the volume +fraction 𝜙 or equivalently the excess density Δ𝜌 as will be discussed later, and the particle +settling velocity 𝑢s. For saline homogeneous case, only slope 𝜃 and excess density Δ𝜌 then +characterize the system. Note that the initial aspect ratio of the reservoir 𝑎 = ℎ0/𝐿0 is kept +nearly constant in each set-up, equal to 2 in set-up 1 and ≃ 0.67 in set-up 2. Its influence will +be discussed along the paper. Following available literature, we define a velocity scale: +𝑢0 = +√︁ +𝑔′ℎ0, +(2.1) +where ℎ0 = 𝑉0/(𝐿0𝑊0) is the average initial heavy fluid height, and 𝑔′ = 𝑔Δ𝜌/𝜌f the reduced +gravity. Note in the case of turbidity currents, it also writes 𝑔′ = 𝑔(𝜌p − 𝜌f)𝜙/𝜌f, where +𝜌p and 𝜌f are the particle and fluid densities. 
This velocity scale can be used to define the Reynolds number R𝑒 and the Rouse number P as the control dimensionless parameters based on the initial conditions:

R𝑒 = 𝑢0ℎ0/𝜈,     P = 𝑢s/𝑢0,     (2.2)

where 𝜈 is the water kinematic viscosity. In our experiments, we then have R𝑒 ∈ [2 × 10^4, 4 × 10^5] and P ∈ [0.004, 0.1]. Note that in lock-release systems, 𝑢0 is the only velocity scale associated with the gravity current, such that the initial Froude number reduces to unity for all experiments. On the other hand, we define a Froude number as the dimensionless current velocity in the slumping regime:

F𝑟 = 𝑢c/𝑢0,     (2.3)

where 𝑢c is the current velocity in the slumping (constant-velocity) regime.

Figure 2: (a) Current nose position as a function of time for various initial volume fractions, for a bottom slope 𝜃 = 7◦ and 𝑢s = 0.74 cm s−1 (for clarity, not all experiments are shown). The black dashed lines are linear fits on the constant-velocity regime, whose slopes give the current velocity 𝑢c. The gray dashed line indicates the end of the tank. (b) Current velocity 𝑢c as a function of the excess density and the volume fraction, for a bottom slope 𝜃 = 7◦ and different settling velocities. (c) Current Froude number as a function of the initial Reynolds number for two different bottom slopes, for 𝑢s = 0.74 cm s−1.
(d) Current Froude number averaged over the initial volume fraction as a function of the bottom slope for 𝑢s = 0.74 cm s−1. Circles correspond to set-up 1 and empty squares to set-up 2. Orange diamonds correspond to the data of Maxworthy & Nokes (2007) for homogeneous saline currents. The black dashed lines are fits of (3.4) to these three datasets, leading to (𝐹𝑟0, 𝐶) equal to (0.36, 3), (0.26, 6) and (0.42, 3), respectively.

3. Current dynamics during the slumping regime
In this section, we focus on the current dynamics and shape during the slumping regime, and explore the effect of the bottom slope and the particle settling velocity.

3.1. Nose position and velocity
3.1.1. Slumping behavior
First, we track the current front position, displayed as a function of time in figure 2(a) for different initial volume fractions 𝜙 (or equivalently different Δ𝜌) and 𝜃 = 7◦. After a short acceleration stage, corresponding to the early collapse of the heavy fluid column dominated by vertical motion, all experiments exhibit a regime in which the current propagates at a constant front velocity 𝑢c (dashed black lines in figure 2(a)), also known as the slumping regime.

In this regime, the measured velocity comes from the lossless conversion of the initial potential energy of the heavy fluid column, Δ𝜌𝑔ℎ0, into horizontal kinetic energy, (1/2)𝜌0𝑢c^2, leading to

𝑢c ∝ √((Δ𝜌/𝜌0)𝑔ℎ0),     (3.1)

where 𝜌0 is the initial heavy fluid density. The prefactor of (3.1) is notably proportional to √(𝜌0/𝜌f) (Von Kármán 1940; Benjamin 1968; Shin et al. 2004), leading to

𝑢c ∝ 𝑢0.     (3.2)

As shown in figure 2(b), the current velocity indeed scales as Δ𝜌^{1/2} (or equivalently 𝜙^{1/2}), as expected from (3.2). This also corresponds to a constant Froude number F𝑟 = 𝑢c/𝑢0, as shown in figure 2(c) (dark blue symbols for 𝜃 = 7◦).
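For concreteness, the scales and dimensionless numbers of (2.1)–(2.3) can be evaluated for a typical set-up 1 run. The input values below (volume fraction, particle density, settling velocity) are illustrative assumptions, not measurements from the paper:

```python
# Illustrative evaluation of the scales of Sec. 2.2 for a set-up 1 geometry.
# phi, rho_p and u_s are assumed example values.
import math

g, nu = 9.81, 1.0e-6            # gravity [m/s2], water kinematic viscosity [m2/s]
rho_f, rho_p = 1000.0, 2650.0   # fluid / particle densities [kg/m3] (rho_p assumed)
L0, W0, V0 = 0.10, 0.20, 3.9e-3 # lock length, width [m], suspension volume [m3]
phi, u_s = 0.03, 0.0074         # volume fraction, settling velocity [m/s] (assumed)

h0 = V0 / (L0 * W0)                        # mean initial column height
g_prime = g * (rho_p - rho_f) * phi / rho_f  # reduced gravity
u0 = math.sqrt(g_prime * h0)               # velocity scale, eq. (2.1)
Re = u0 * h0 / nu                          # Reynolds number, eq. (2.2)
P = u_s / u0                               # Rouse number, eq. (2.2)

print(f"h0 = {h0:.3f} m, u0 = {100 * u0:.1f} cm/s, Re = {Re:.1e}, P = {P:.3f}")
```

With these numbers, ℎ0 ≃ 0.195 m (so 𝑎 = ℎ0/𝐿0 ≃ 2, consistent with set-up 1) and the resulting R𝑒 and P fall inside the reported ranges [2 × 10^4, 4 × 10^5] and [0.004, 0.1].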
Varying the particle settling velocity while keeping the bottom slope at 7◦ affects neither the scaling of (3.2) nor its prefactor (see figure 2(b)), which remains within 20 % of the one corresponding to homogeneous saline gravity currents (red symbols).

To conclude, the slumping regime is characterized by a time-independent front velocity, as usually obtained, whose value remains, more surprisingly, nearly independent of R𝑒 and P in the range of parameters considered here.

3.1.2. Effect of the bottom slope
On the other hand, changing the bottom slope affects the current velocity more clearly. As shown in figure 2(c), currents on a slope of 7◦ are nearly 30 % faster than those on 𝜃 = 1◦. After averaging over the initial volume fraction 𝜙, on which no clear dependency is observed as explained in the previous section, we find that the Froude number ⟨F𝑟⟩ increases rather linearly with the bottom slope 𝜃 (see figure 2(d)). The slope of this linear relationship is recovered in experimental set-up 2, although the measured ⟨F𝑟⟩ are generally lower. It is also recovered in the study of Maxworthy & Nokes (2007), this time with generally higher Froude numbers.

This increase of the Froude number with the bottom slope is not necessarily obvious. Indeed, the time-independent velocity in the slumping regime results from a balance between inertia and the pressure gradient, which should not lead to an increase with slope. On the other hand, the slope adds a constant forcing term, which could be associated with an accelerating flow unless friction balances this extra force. However, the full balance of these different terms hardly leads to a constant velocity. Yet, it is clearly observed here experimentally. One can then understand this slope trend by considering energy balances and assuming this velocity to be constant.
During the slumping of the column, the current kinetic energy results from the initial potential energy, but also from the work of the along-slope component of gravity and of friction during this phase. This leads to the following balance:

(1/2)𝜌0𝑢c^2 = 𝐴Δ𝜌𝑔 cos 𝜃 ℎ0 + 𝐵Δ𝜌𝑔 sin 𝜃 𝐿 − (1/2)𝐶d𝜌0𝑢c^2 𝐿/ℎ0,     (3.3)

assuming an inertial-scale stress, and where 𝐿 is the characteristic distance over which the current moves during this early phase. 𝐴, 𝐵 and 𝐶d are scale constants accounting for unknown and local details of the slumping behavior in the bulk. Hence,

F𝑟(𝜃) ≡ 𝑢c/√((Δ𝜌/𝜌0)𝑔ℎ0) = √(2𝐴/(1 + 𝐶d𝐿/ℎ0)) √(cos 𝜃 + 𝐶 sin 𝜃) ≡ F𝑟0 √(cos 𝜃 + 𝐶 sin 𝜃),     (3.4)

where 𝐶 = (𝐵/𝐴)(𝐿/ℎ0) and F𝑟0 = F𝑟(𝜃 = 0). For small slopes (𝜃 → 0◦), (3.4) can be approximated by a linear relationship in 𝜃, as described before.

The fits of (3.4) to the data of set-up 1, set-up 2, and Maxworthy & Nokes (2007) are shown in figure 2(d). Although (3.4) represents the data well, the fits are poorly constrained, due to the small number of experimental points or to the large dispersion in the dataset of Maxworthy & Nokes (2007). This is especially true for the parameter 𝐶, whose uncertainty is about 100 %, although it is found to be of the same order of magnitude in all datasets. Note that this order of magnitude implies that the linearized version of (3.4) is only valid up to 𝜃 ∼ 1◦, hence justifying here the use of the non-linearized form of (3.4). Dedicated experiments would be required to study in detail the slumping of the column and its dependence on the geometry of the experiments, for instance by changing the lock aspect ratio 𝑎.

The resulting values of the Froude number for 𝜃 = 0◦, which are much better constrained, highlight the general discrepancies between the different datasets, which could come from geometrical differences between the corresponding experimental set-ups.
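The fit of (3.4) can be reproduced with a standard nonlinear least-squares routine. The sketch below uses synthetic (𝜃, F𝑟) points generated from assumed parameters, only to demonstrate the procedure and that the fit recovers them:

```python
# Sketch: fitting the slope dependence Fr = Fr0*sqrt(cos(theta) + C*sin(theta)),
# eq. (3.4). The "observations" are synthetic, generated with (Fr0, C) = (0.36, 3)
# purely to illustrate the fitting step.
import numpy as np
from scipy.optimize import curve_fit

def fr_model(theta_deg, fr0, c):
    t = np.radians(theta_deg)
    return fr0 * np.sqrt(np.cos(t) + c * np.sin(t))

theta = np.array([0.0, 1.0, 3.0, 5.0, 7.0])   # bottom slopes [deg]
fr_obs = fr_model(theta, 0.36, 3.0)           # synthetic "measurements"

(fr0_fit, c_fit), cov = curve_fit(fr_model, theta, fr_obs, p0=(0.4, 1.0))
print(f"Fr0 = {fr0_fit:.3f}, C = {c_fit:.2f}")
```

With only a handful of slope values, the covariance matrix returned by the fit is the natural way to quantify the large uncertainty on 𝐶 mentioned above.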
In the experimental set-up of Maxworthy & Nokes (2007), the heavy fluid is released well below the water surface (partial-depth release), whereas only full-depth releases were performed in set-up 1. According to the predictions of Ungarish & Zemach (2005), this can lead to an increase of the velocity of almost 50 %, which matches well the discrepancy between the two studies. In set-up 2, the ratio between typical current heights and the tank width is about 1.5, compared to 0.5 in set-up 1. We therefore expect the energy dissipation induced by friction at the walls of the tank to be much larger, hence explaining the lower measured Froude numbers.

In any case, the measured values are smaller than F𝑟 = 0.5, predicted for a non-inclined tank by the simple steady model of Benjamin (1968). The same holds for shallow-water models including the properties of dam-break configurations (Ungarish & Zemach 2005); models also including non-Boussinesq effects (Ungarish 2007) or the motion of the light fluid in the upper layer (Ungarish 2011) lead to even larger predicted Froude numbers.

To conclude, the conceptual model (3.4) allows us to recover the 𝜃-dependency obtained in the present configuration, extending available results from the literature, which mostly focus on zero-slope configurations, to predict the front velocity of the current. Clearly, such an approach does not allow us to discriminate the influence of particle settling from the situation of a homogeneous saline current. This will be discussed in the following section.

3.2. Existence and duration of the slumping regime
The previous section discussed the value of the current velocity during the constant-velocity (slumping) regime. However, the latter could not be detected in every experiment. In the following, we discuss its duration and existence with respect to the settling velocity 𝑢s and the excess density Δ𝜌, while keeping the bottom slope at 7◦.

3.2.1.
Existence of the constant velocity regime
Figure 3(a) shows the influence of the settling velocity on the nose propagation of currents at a fixed Δ𝜌 = 45 kg m−3 (𝜙 = 3 %). As discussed in section 3.1, all curves exhibit the same initial constant velocity at a given slope 𝜃, with the exception of the largest settling velocity, for which no clear constant-velocity regime can be observed. For all our experiments, we classify the cases with (blue dots) or without (orange squares) a constant-velocity regime in figure 3(b), in a (𝑢s, 𝑢0) diagram. The cases shown in figure 3(a) are indicated by a horizontal green rectangle. It highlights that the loss of a constant-velocity regime occurs when particle settling overcomes the current inertia, with a transition occurring approximately at a Rouse number P = 𝑢s/𝑢0 ∼ 0.067 (black dashed line in figure 3(b)).

3.2.2. Duration of the constant velocity regime
The duration of the constant-velocity regime, i.e. the time 𝑡end at which the front evolution is observed to deviate from a linear trend with 𝑡, is shown to decrease as the settling velocity

Figure 3: (a) Current nose position as a function of time for various particle settling velocities, at a fixed volume fraction 𝜙 = 3 %. (b) Regime diagram indicating currents for which the constant-velocity regime is detected (blue dots) and those where it is not (orange squares). See section 3.2.1 for more details. The black dashed line indicates a possible linear regime separation, corresponding to P = 0.067.
(c) Ending time of the constant-velocity regime as a function of the current characteristic time scale 𝐿0/𝑢0, for various particle settling velocities. (d) Same as (c), but with both axes rescaled by the settling time scale 𝑡s = ℎ0/𝑢s. In both subplots, the black dashed line indicates 𝑡end = 28.5𝐿0/𝑢0. In (d), the vertical dotted line indicates the limit P/𝑎 ≃ 0.033 (see figure 3(b) and section 3.2), and the horizontal dash-dotted line the limit 𝑡end = 0.5𝑡s. In this figure, experiments are in set-up 1 (𝑎 = 2) for 𝜃 = 7◦.

increases (see figure 3(a)). In the case of homogeneous saline gravity currents, this duration is about 𝑡end ≃ 30𝐿0/𝑢0, as shown in figure 3(c). This result is in agreement with previous experiments and shallow-water modeling, and corresponds to the duration needed for the bore (the current nose of the upper light fluid layer) to reach the nose of the heavy fluid current (Rottman & Simpson 1983; Ungarish & Zemach 2005). Note that previous studies have reported prefactors of 𝑡end ∝ 𝐿0/𝑢0 between 20 and 30, which corresponds to a travel distance of 7 to 12 reservoir lengths (Rottman & Simpson 1983; Ungarish & Zemach 2005; Chowdhury & Testik 2011; Nogueira et al. 2014; Sher & Woods 2015; Ottolenghi et al. 2016). The difference may essentially result from the difficulty in measuring 𝑡end (Rottman & Simpson 1983; Ungarish 2009; Ottolenghi et al. 2016).

For the smallest glass beads (𝑢s = 0.32 cm s−1), figure 3(c) shows that they behave similarly to the saline gravity currents, except for slow currents (low volume fractions, high 𝐿0/𝑢0), which exhibit smaller 𝑡end. As the settling velocity increases, an increasing number of cases do not follow this trend, more likely for large values of 𝐿0/𝑢0, until all currents exhibit smaller 𝑡end values.
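The behaviour summarized in figure 3 can be condensed into a minimal sketch: 𝑡end follows the saline scaling ≃ 28.5𝐿0/𝑢0 at small P/𝑎 and the settling time ≃ 0.5ℎ0/𝑢s at larger P/𝑎, with no constant-velocity regime detected beyond P/𝑎 ≃ 0.033. The coefficients are read off figure 3, so they are indicative only:

```python
# Sketch of the observed slumping-duration behaviour (coefficients read off
# figure 3; indicative, not a theoretical prediction).

def slumping_duration(L0, h0, u0, u_s):
    """Return the expected slumping duration t_end [s],
    or None when no constant-velocity regime is expected."""
    a = h0 / L0                    # lock aspect ratio
    p_over_a = (u_s / u0) / a      # Rouse number over aspect ratio
    if p_over_a > 0.033:
        return None                # settling suppresses the constant-velocity regime
    t_saline = 28.5 * L0 / u0      # bore catch-up time (saline limit)
    t_settle = 0.5 * h0 / u_s      # settling-controlled limit
    return min(t_saline, t_settle)
```

For instance, with 𝐿0 = 0.10 m, ℎ0 = 0.195 m, 𝑢0 = 0.31 m s−1 and 𝑢s = 0.74 cm s−1 (illustrative values), P/𝑎 ≈ 0.012 and the saline scaling dominates; raising 𝑢s to 3.2 cm s−1 pushes P/𝑎 beyond 0.033, where no constant-velocity regime is expected.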
By using 𝑡s = ℎ0/𝑢s as a characteristic settling time, which corresponds to the time required for a particle to settle over the initial column height, we obtain a good collapse of the data at various settling velocities (see figure 3(d)). The resulting trend, whose horizontal axis is now controlled by the ratio between the Rouse number P and the reservoir aspect ratio 𝑎 = ℎ0/𝐿0, exhibits a transition between two regimes. For small P/𝑎, settling is negligible and 𝑡end scales with 𝐿0/𝑢0, as for saline density currents (black dashed line). For P/𝑎 larger than 0.01, settling and current inertia gradually become of the same order of magnitude. The curve then transitions to a regime entirely controlled by particle settling, 𝑡end ≃ 𝑡s (dash-dotted line). The trend stops at P/𝑎 ≃ 0.033, the limit beyond which the constant-velocity regime is not observed anymore.

The data presented in figure 3 come from experiments performed in set-up 1, for which 𝑎 = 2 is kept constant. As such, they do not allow us to assess the relevance of 𝑎 for the variation of the slumping regime duration. However, by comparing data from set-ups 1 and 2 (different 𝑎) for the same particles (thus the same 𝑢s), one can observe that a better collapse is obtained after rescaling by the settling time ℎ0/𝑢s (see appendix figure 8). This highlights the relevance of the lock aspect ratio 𝑎 in the control of the constant-velocity regime duration. Finally, it has to be noted that no dependence of 𝑡end on the bottom slope is found in the range of parameters covered here (see appendix figure 8). This result is not necessarily obvious, as the current velocity in the slumping regime was shown to depend on 𝜃 in the previous section. However, the slope is actually a second-order effect here, as explained in the previous section. In conclusion, the duration of the slumping regime is found to depend mainly on P/𝑎.

4.
Current morphology during the slumping regime
In this section, we focus on the current morphology. During the constant-velocity regime, the current shape is found to be well defined by an average shape in the frame of the current nose (blue and orange curves in figure 4(a, b)). Fluctuations around this average profile can be quantified by the standard deviation, as shown in figure 4(d).

4.1. Shape morphometrics
The quantitative characterization of the current shape has always been a challenge in the literature, aiming, for example, at the extraction of a characteristic current height. When velocity or density/concentration profiles are accessible, studies have used a height weighted by buoyancy (Shin et al. 2004; Marino et al. 2005; Cantero et al. 2007; Sher & Woods 2015) or by kinetic energy profiles (e.g. Islam & Imran 2010; Stagnaro & Bolla Pittaluga 2014). When a single contour is available, the height of the trailing current behind the head has been widely used, provided it is well defined (e.g. Simpson & Britter 1980; Bonnecaze et al. 1993; Lowe et al. 2005; Chowdhury & Testik 2011).

As shown in figure 4(c), the shape of the observed currents spans from a single head (low volume fractions) to a continuous current with no distinguishable head lobe (highest volume fractions). The same qualitative variation is observed between low and high settling velocities, or for saline homogeneous currents between low and high excess densities (not shown here).

In order to account for all these morphologies, we use the following approach.
First, we fit the theoretical shape of a steady current calculated by Benjamin (1968), to which we add a free vertical shift to account for the nose (foremost point of the head) height induced by

Figure 4: (a) Current shape for an experiment with an initial volume fraction 𝜙 = 3 %. Blue lines: all shapes during the constant-velocity regime, superimposed with transparency. Orange line: temporal average shape. Red dashed line: fit of Benjamin's current shape. Green dashed line: fit of the logarithmic shape (4.1). (b) Zoom of (a) on the first centimeters. The gray rectangle indicates the camera pixel size. (c) Average shapes during the constant-velocity regime for various initial volume fractions. (d) Standard deviation corresponding to the shapes in (c). In (c) and (d), the black dashed line separates the current head from its body. Not all experiments are shown for the sake of clarity. In this figure, grains are silica sand with 𝑑 ∼ 120 𝜇m, and the bottom slope is 𝜃 = 7◦.

bottom friction (red dashed lines in figure 4(a, b)). This allows us to extract a current height ℎb, as well as the current nose height ℎn. While Benjamin's shape accounts for the large-scale behavior of the current's head, it does not reproduce well the curvature close to the nose (dashed red line in figure 4(c)), and therefore leads to poor estimations of ℎn.
However, we noticed that, close to the nose, the current head is well approximated by a portion of a logarithm:

ℎ(𝑥) = ℎh log((𝑥 + 𝛿)/𝑥c),     (4.1)

where 𝛿 is a shift parameter, found to be almost constant for all currents, and therefore fixed to 1.4 cm. Here, ℎh gives a characteristic head height representing its geometry, and ℎ(0) ≡ ℎn the nose height.

Finally, we also noticed that the average current shape can be split into two parts (figure 4(c, d)). Close to the nose, the head presents little variation during the current propagation (figure 4(d)), and is also rather invariant with respect to the volume

Figure 5: Average shape properties as a function of the bulk Reynolds number, for various bottom slopes and settling velocities. (a) Current height ℎb/ℎ0. (b) Head height ℎh/ℎ0. (c) Nose height ℎn/ℎ0. (d) Current head volume 𝑉h/𝑉0.

fraction, as well as to the bottom slope and the settling velocity. On the contrary, the tail presents the largest temporal fluctuations, induced by shear instabilities, and its morphology largely depends on the volume fraction and the settling velocity (see section 4.2 for further discussion). Accordingly, the transition between head and tail is defined from the change in the standard deviation, found to increase beyond 10 cm behind the current nose (black dashed line in figure 4(d)). The volume of the current head (per unit width), 𝑉h, is then calculated over the corresponding 10 cm behind the current nose (black dashed line in figure 4(c)).
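As an illustration, the logarithmic head shape (4.1) and the resulting nose height and head volume can be evaluated numerically. The parameter values below (ℎh, 𝑥c) are assumed for the example, not fitted values from the experiments:

```python
# Sketch: evaluate the logarithmic nose shape, eq. (4.1), and integrate it
# over the first 10 cm behind the nose to obtain a head volume per unit
# width. h_h and x_c are illustrative values, not fitted data.
import numpy as np

def head_shape(x, h_h, x_c, delta=0.014):
    """Nose-region height h(x) [m], eq. (4.1); x measured from the nose,
    delta fixed to 1.4 cm as in the text."""
    return h_h * np.log((x + delta) / x_c)

h_h, x_c = 0.02, 0.004            # assumed head-height scale and x_c [m]
x = np.linspace(0.0, 0.10, 2001)  # first 10 cm behind the nose
h = head_shape(x, h_h, x_c)

h_n = head_shape(0.0, h_h, x_c)   # nose height h(0) = h_n
# head volume per unit width [m2], trapezoidal integration
V_h = float(np.sum(0.5 * (h[:-1] + h[1:]) * np.diff(x)))
print(f"h_n = {100 * h_n:.2f} cm, V_h = {1e4 * V_h:.1f} cm^2")
```

The same trapezoidal integration applied to the measured average contours is one straightforward way to obtain 𝑉h from the extracted shapes.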
The volume of the tail (per unit width), 𝑉t, is calculated as the total volume minus the head volume 𝑉h.

4.2. Results
The characteristic quantities ℎb, ℎh, ℎn, 𝑉h and 𝑉t are shown in figures 5 and 6 for all experimental runs that exhibit a constant-velocity regime (see section 3.2.1). As a main observation, all parameters linked to the current head morphology (ℎb, ℎh, ℎn, 𝑉h) are found to be independent of all experimental parameters: excess density (i.e. R𝑒), settling velocity (i.e. P) and bottom slope. On the other hand, only the volume of the tail, 𝑉t, is found to increase with the excess density (i.e. R𝑒), and to decrease with the settling velocity (i.e. P).

Figure 6: Average current tail volume as a function of (a) the bulk Reynolds number and (b) the Rouse number, for various bottom slopes and settling velocities. Intermediate slopes are not shown, to better highlight the effect of the settling velocity. In (a), biased points, typically for R𝑒 > 10^5, represent runs for which we have little confidence in the tail volume due to the lock opening (see section 5.2 and appendix A for further discussion). They are removed in (b). The legend for the colors is the same as in figure 5.

4.2.1. Current height
As shown in figure 5(a), the average current height ℎb is ≃ 0.4 ℎ0, in agreement with previous studies (Shin et al. 2004; Sher & Woods 2015). All experimental points also lie between the predictions of Benjamin and of single-layer shallow-water models, ℎb = 0.5 ℎ0, and those of two-layer shallow-water models, ℎb = 0.35 ℎ0 (Benjamin 1968; Ungarish 2007, 2011). Note that a slight decrease can be observed for large excess densities, corresponding to large volume fractions and Reynolds numbers.
This could result from non-Boussinesq effects, although shallow-water models predict these to be insignificant in the density-ratio range of our experiments (Ungarish 2007, 2011).

Likewise, we obtain average head and nose heights of ℎh ≃ 0.13 ℎ0 and ℎn ≃ 0.04 ℎ0 (figure 5(b, c)). Note that here ℎn/ℎb ≃ 0.1, similar to previous measurements available in the literature for the same range of Reynolds numbers, made on homogeneous density currents (see figure 11.13 of Simpson (1999), and the corresponding measurements of Barr (1967) and Keulegan (1957)).

Despite the dispersion in our data, it also seems that saline homogeneous currents and turbidity currents with the smallest settling velocity generally have greater heights than turbidity currents with larger settling velocities (see figure 5(a–c)). This could result from a less dilute interface induced by larger settling velocities, but confirming it would require dedicated experiments. Finally, we could not see any clear effect of the bottom slope in the studied experimental parameter range.

4.2.2. Current volume
While the current head volume is constant, 𝑉h ≃ 0.25𝑉0 (see figure 5(d)), the tail volume increases with the Reynolds number (see figure 6(a)). For saline currents and the smallest settling velocity, this increase is rather linear. However, increasing the settling velocity leads to smaller values of 𝑉t/𝑉0, which may correspond to a different slope of this relationship and/or to a different power law. A good collapse is obtained by plotting the experimental data as a function of the Rouse number, for which we find 𝑉t/𝑉0 ∝ P^{−1} (see figure 6(b)). The volume increase cannot be driven solely by the Rouse number, as this would imply that saline gravity currents, for which P = 0, would have a constant 𝑉t/𝑉0 value, which is not the case. This means that 𝑉t/𝑉0 depends on both R𝑒 and P.
While most currents have 𝑉t/𝑉0 ⩾ 1, suggesting the presence of water entrainment, currents for P > 0.06 have 𝑉t/𝑉0 ⩽ 1, suggesting the dominance of particle settling. The dependence of 𝑉t/𝑉0 on (R𝑒, P) therefore has to be related to entrainment, which is discussed in section 5.

5. Water entrainment
5.1. Parametrization and hypotheses
Here, we consider a fixed observation window starting at the lock gate and ending at the end of the illuminated area (see figure 1). In this zone, the continuity equation for the current volume 𝑉 (per unit width) can be written as

d𝑉/d𝑡 = 𝑄e − 𝑄s + 𝑄in,     (5.1)

where 𝑄e and 𝑄s are the fluxes induced by water entrainment and particle settling, respectively. As the observation window does not take into account what is inside the lock, an input flux 𝑄in must be taken into account as long as part of the suspension is transferred from the lock to the current.

The entrainment flux can then be written as the quantity of water passing through the interfacial line between the current and the ambient, Γ, at a velocity 𝑤e:

𝑄e = 𝑤eΓ,     (5.2)

where 𝑤e = 𝐸𝑢c is the entrainment velocity (Jacobson & Testik 2014) and 𝐸 the entrainment coefficient.

As shown in figure 7(a), the evolution of the current volume through time can be split into three phases. Just after the lock opens, the volume increases due to the inflow at the upstream boundary (lock gate) induced by the column collapse (phase 1). After the reservoir has emptied, only entrainment and settling remain, and the current volume increase becomes slower (phase 2). As the current increases its volume (entrainment) and loses some particles (settling), it dilutes up to a point where it gradually passes below the detection threshold chosen for the contour extraction (see figure 1(c–h)). At this point, the current volume starts to decrease, and (5.1) is not applicable anymore.
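When 𝑄in and 𝑄s are negligible, (5.1)–(5.2) reduce to d𝑉/d𝑡 = 𝐸𝑢cΓ; since the nose moves at 𝑢c, this gives 𝐸 = (d𝑉/d𝑥)/Γ. The sketch below applies this estimate to synthetic 𝑉(𝑥) samples, purely to illustrate the procedure:

```python
# Sketch of the bulk entrainment estimate implied by (5.1)-(5.2): with Q_in
# and Q_s neglected, dV/dt = E*u_c*Gamma, so E = (dV/dx)/Gamma when the
# nose advances at u_c. The V(x) record below is synthetic, for illustration.
import numpy as np

def entrainment_coefficient(x, V, Gamma):
    """Bulk E from nose positions x [m], volumes per unit width V [m2]
    and interface length Gamma [m], via a linear fit of V against x."""
    dVdx = np.polyfit(x, V, 1)[0]   # slope of V(x)
    return dVdx / Gamma

x = np.linspace(0.4, 0.8, 9)        # nose positions during phase 2 [m]
E_true, Gamma = 0.02, 0.9           # assumed coefficient and interface length
V = 3.0e-2 + E_true * Gamma * x     # synthetic volume record [m2]

print(f"E = {entrainment_coefficient(x, V, Gamma):.3f}")
```

Fitting the slope over a window of nose positions, rather than differencing two frames, makes the estimate less sensitive to the contour-extraction noise discussed below.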
The ubiquitous presence of settling, combined with the observed diversity of cases, makes it difficult to distinguish whether the volume increase is solely due to entrainment. Therefore, we decide to compute a bulk entrainment parameter, by considering the volume difference between the maximum volume observed and the initial volume in the reservoir. We assume that, at this moment, the reservoir has completely emptied, and that the current velocity is still large enough to neglect settling processes. Following previous studies (Cenedese & Adduce 2008; Nogueira et al. 2014; Jacobson & Testik 2014; Wilson et al. 2017), the entrainment coefficient therefore reads

𝐸 = (1/𝑢c)(d𝑉/d𝑡)(1/Γ) = (d𝑉/d𝑥)(1/Γ).     (5.3)

Following the mentioned literature, the interfacial length Γ is also taken at the time when the current volume reaches its maximum.

5.2. Bulk entrainment coefficient
The resulting entrainment coefficients are shown in figure 7(b) as a function of the Reynolds number, along with data from previous studies. Disregarding the biased points (see below, and appendix A), an increasing trend with the Reynolds number is visible. On the other hand, no clear impact of the bottom slope or the settling velocity (or, equivalently here, particle size) is found in the studied range. Note that for Reynolds numbers in the range 10^4–3 × 10^4, volume variations induced by entrainment and by fluctuations during the propagation are of the same order of magnitude, leading to large uncertainties in the estimation of the entrainment coefficient.

At large Reynolds numbers (typically R𝑒 > 10^5), the entrainment saturates at a constant value. However, we attribute this to a bias induced when releasing the initial reservoir. For the corresponding runs, the release velocity is too small compared to the current velocity.
This results in the mixing of a significant portion of the reservoir with the ambient fluid brought back by the overlying backflow (see appendix A for a further description). As such, these currents are still fed by an input flux, even though the slumping regime, controlled by the current head properties (formed before the opening-induced mixing occurs), is over. This results in a constant measured maximum volume, which corresponds to the product of the observation zone length and the current height.

Nevertheless, as shown in figure 7(b), our data agree well with previous experimental studies on both saline (Nogueira et al. 2014; Ottolenghi et al. 2016; Balasubramanian & Zhong 2018) and turbidity currents (Jacobson & Testik 2014; Wilson et al. 2017), suggesting a dominant correlation between entrainment and R𝑒. Surprisingly, Wilson et al. (2017) found constant entrainment values matching the saturation induced by the bias in our data at R𝑒 > 10^5. Note that the data of Balasubramanian & Zhong (2018) were obtained by a direct method based on buoyancy fluxes, which further validates the entrainment parametrization used in this and other studies. Despite the dispersion within each dataset, we however find slightly larger entrainment coefficients. Note that the absolute value of our results depends on the threshold chosen for the current contour extraction, which can lead to volume variations corresponding to a vertical downward shift of the data by 𝐸 ≃ 10^{−2}.

6. Conclusion
In the present study, we investigate the slumping regime, characterized as the constant-front-velocity regime, of lock-release turbidity currents using an experimental approach. In particular, we systematically explore the influence of the volume fraction, the bottom slope and the particle settling velocity, whose coverage remains relatively sparse and scattered in the existing literature.
We define the associated independent dimensionless parameters as the Reynolds number R𝑒, the Rouse number P and the slope 𝜃. A direct comparison is also made with saline homogeneous gravity currents, for which P ≡ 0.

First, we focus on the nose dynamics. We show that, in the explored parameter range, saline and turbidity currents exhibit a constant-velocity regime, i.e. a slumping regime, provided the Rouse number is small enough, P ≲ 0.02, independently of the bottom slope. During this regime, the current velocity scales as √(𝑔′ℎ0), as expected. The corresponding current Froude numbers are systematically smaller than those predicted by Benjamin (1968), and also than those of shallow-water models including the properties of lock-release systems (Ungarish 2007, 2011). Previous measurements on saline and turbidity currents have reported Froude numbers in better agreement with these theories (e.g. Shin et al. 2004; Lowe et al. 2005; Nogueira et al. 2014; Sher & Woods 2015), but also smaller ones in agreement with those measured in this study (Longo et al. 2018; Balasubramanian & Zhong 2018). More interestingly, the current Froude numbers are found to increase with the bottom slope 𝜃, by about 35 % between 0° and 15°, while being independent of both R𝑒 and P in the range of parameters considered. Surprisingly, a similar increase of the maximum front velocity with the bottom slope has been observed in the literature in the case of granular collapses (Mangeney et al. 2010; Martin et al. 2017).

[Figure 7 here. Panel (a): current volume V/V0 versus nose position, annotated with the phases "door opening" and "slumping regime", the flux balances 1: Qin + Qe − Qs, 2: Qe − Qs, 3: ?, and equation (5.1). Panel (b): entrainment coefficient E versus bulk Reynolds number R0 = ρ0√(g′0H0)H0/η, showing the trend E ∝ R0, the biased points, and literature data for saline currents (Balasubramanian & Zhong 2018; Ottolenghi et al. 2016; Nogueira et al. 2014) and turbidity currents (Wilson et al. 2017; Jacobson & Testik 2014).]

Figure 7: (a) Current volume as a function of its nose position for 𝜃 = 7.2°, 𝑢s = 0.32 cm s⁻¹ and 𝜙 = 0.24 %. (b) Water entrainment coefficient as a function of the bulk Reynolds number. Biased points, typically for R𝑒 > 10⁵, represent runs for which we have little confidence in the calculated entrainment due to the lock opening (see section 5.2 and appendix A for further discussion). Not all error bars are shown for the sake of clarity. The legend of the colored circles is the same as in figure 5.

Even if slope-dependent, this constant-velocity regime is still attributed here to a slumping regime involving inertia and the pressure gradient, and not to a frictional-buoyancy equilibrium, which would be obtained at larger slopes and longer times (Britter & Linden 1980). This regime of constant velocity is therefore not trivial, as the alongslope component of gravity could induce a constant acceleration of the current during its propagation. Even if no theoretical background is available to support the observed constant-velocity regime, an energetic balance including the work of the alongslope weight and of friction during the column slumping is found here to capture the observed slope effect on the front velocity.

While the settling velocity, i.e. the Rouse number P, does not affect the current velocity, it can affect the duration of the slumping (constant-velocity) regime, which on the other hand does not depend on the bottom slope 𝜃. The appropriate parameter to study this duration is found to be P/𝑎. At low values of P/𝑎, typically P/𝑎 ≲ 0.01, the current velocity starts to decrease after 𝑡end ≃ 30𝐿0/𝑢0, as for saline currents. However, at larger values, the duration decreases, up to a point where it becomes fully controlled by a settling characteristic time, typically 𝑡end ≃ 0.5ℎ0/𝑢s. This last scaling is only valid up to the point where the constant-velocity regime is no longer detected, P/𝑎 ≳ 0.033.
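The regime-duration scalings just described can be summarized as a piecewise estimate. A sketch using the thresholds quoted in the text (the crossover values are approximate, and the function name is ours):

```python
def slumping_duration(L0, h0, u0, us):
    """Estimated duration t_end of the constant-velocity (slumping) regime.

    Implements the scalings reported in the text: t_end ~ 30 L0/u0 for
    P/a <~ 0.01, t_end ~ 0.5 h0/us up to P/a ~ 0.033, and no slumping
    regime detected beyond that. Thresholds are approximate.
    """
    P_over_a = (L0 / h0) * (us / u0)
    if P_over_a <= 0.01:
        return 30.0 * L0 / u0      # saline-like, settling-independent
    if P_over_a <= 0.033:
        return 0.5 * h0 / us       # controlled by the settling time
    return None                    # constant-velocity regime not detected
```

For instance, with an assumed lock of 𝐿0 = 0.1 m, ℎ0 = 0.2 m, 𝑢0 = 0.2 m s⁻¹ and 𝑢s = 0.32 cm s⁻¹, P/𝑎 = 0.008 and the saline-like scaling applies.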
Second, we focus on the current morphology during the slumping regime. We show that the current shape is mostly constant in the frame of the current nose. A striking property of this shape relates to the head (the first ∼10 centimeters behind the nose), which is found to be independent of all experimental parameters (excess density, bottom slope and settling velocity) and characterized by small fluctuations around the average shape. This further supports the work of Benjamin (1968), which extracts, from energy considerations at the head scale, a head shape independent of the flow velocity, assuming the absence of friction. Furthermore, above the sublayer induced by bottom friction, the head shape is well approximated by the theoretical shape of Benjamin's current. Close to the nose, the head is further curved downward, presumably due to the influence of bottom friction, and is better approximated by a portion of a logarithmic curve.

On the other hand, the tail of the current exhibits more significant fluctuations around the average shape, suggesting more significant entrainment in this part of the current. Moreover, the volume of the tail increases with the excess density and decreases with the settling velocity, suggesting a strong interplay between entrainment and settling during the slumping regime. We further investigate entrainment by calculating entrainment coefficients from the maximum volume reached during an experiment. Even if scattered, the resulting entrainment coefficient is shown to increase rather linearly with R𝑒, in agreement with compiled literature data on both turbidity and saline currents, and to remain independent of P and 𝜃. Note that a number of assumptions have been made in the parametrization of the entrainment coefficients.
In order to validate these results, and more specifically to focus on the influence of the settling velocity, dedicated experiments with more accurate measurements of the density should be carried out, similarly to Balasubramanian & Zhong (2018).

This work therefore shows that, as long as P/𝑎 ≲ 0.01, the measured current morphodynamics are not impacted by the presence of particles during the slumping regime. The current velocity (𝑢c = F𝑟(𝜃)√(𝑔′ℎ0)), height (ℎb ≃ 0.4ℎ0), regime duration (𝑡end ≃ 30𝐿0/√(𝑔′ℎ0)) and bulk entrainment coefficients (𝐸 ∝ R𝑒) are in agreement with previous measurements on saline density currents (Sher & Woods 2015; Ottolenghi et al. 2016; Balasubramanian & Zhong 2018). This supports their modeling as an average fluid of different density. However, an influence of the slope 𝜃 on the dynamics of the current has been clearly identified here for both saline and turbidity currents. In this case, the departure from the slumping regime is to be attributed to the finite-reservoir influence, which is 𝑎-dependent, often reported for gravity currents and referred to as the inertial regime. On the other hand, when P/𝑎 ≳ 0.01, the current dynamics differ due to the presence of the settling particles. In particular, the regime duration is found to strongly decrease with P/𝑎, while, more surprisingly, the front velocity remains unaffected. The duration of the regime results from a complex balance between entrainment and settling. The former has been shown to depend only on R𝑒, while the latter is intrinsically quantified by P. The resulting duration therefore depends on both, as reported experimentally here. In this case, the departure from the slumping regime is attributed to particle settling, which is P-dependent and would require dedicated modeling, similarly to non-Newtonian effects in the fluid rheology at large particle volume fractions (Chowdhury & Testik 2011; Jacobson & Testik 2014).

Acknowledgements.
We would like to acknowledge the contributors of the open-source Python libraries, including Matplotlib (Hunter 2007), NumPy (Harris et al. 2020) and SciPy (Virtanen et al. 2020), which provide an incredibly efficient ecosystem for scientific research in Python. We also thank Jean-Dominique Barron (IMFT) and Sébastien Cazin (IMFT) for their support in carrying out the experiments.

[Figure 8 here: end time 𝑡end of the constant-velocity regime versus (a) the time scale 𝐿0/𝑢0 and (b) P/𝑎 = (𝐿0/ℎ0)(𝑢s/𝑢0), with the rescaled end time (𝑢s/ℎ0)𝑡end, for bottom slopes 𝜃 = 0°–7.24° (set-up 1) and 𝜃 = 7°–15° (set-up 2).]

Figure 8: Ending time of the constant-velocity regime as a function of the current characteristic time scale, for various bottom slopes in the two experimental set-ups. Here, the settling velocity is 𝑢s = 0.74 cm s⁻¹.

Funding. We acknowledge financial support from the French National Research Agency grant ANR-19-CE30-0041/PALAGRAM.

Declaration of interests. The authors report no conflict of interest.

Data availability statement. The data that support the findings of this study are openly available in Zenodo at https://doi.org/10.5281/zenodo.7487190.

Author ORCID. C. Gadal, https://orcid.org/0000-0002-2173-5837; M. Mercier, https://orcid.org/0000-0001-9965-3316; L. Lacaze, https://orcid.org/0000-0002-4945-7445.

Appendix A. Entrainment induced by the door opening

In section 5.2, we observed that the measured entrainment coefficient saturates to a constant value for R𝑒 > 10⁵. This is attributed to the opening of the lock gate, snapshots of which are shown in figure 9.

A first source of entrainment is induced at the beginning of the gate opening. As shown by figure 9(a–c), at the beginning, the tank empties as the suspension flows out at the bottom, with no opportunity for the ambient fluid to create a counter-current at the top (the locked door is impermeable).
As soon as the gate has opened higher than the height of the current (figure 9(d, e)), the ambient fluid creates a counter-current just above the turbidity current, thus mixing the ambient fluid with the suspension and refilling the lock. Note that this first mechanism induces a dilution of ≲ 10 % of the suspension behind the lock (inferred from the lock volume to be filled in figure 9(c)).

A second source of entrainment occurs when the suspension column begins to collapse. As shown by figure 9(f–h), the column collapse begins at the level of the bottom of the door, not at the top of the lock. This creates an intrusion of ambient fluid inside the lock, surrounded at the top and bottom by the suspension (figure 9(g)). This unstable situation is quickly resolved by the collapse of the upper part of the suspension, which mixes with the ambient fluid below (figure 9(h, i)). The result, at the end of the gate opening, is a full lock of suspension at a smaller volume fraction than the initial one, although a large volume of suspension has already been released into the turbidity current.

[Figure 9 here: snapshots (a–j) of the door opening, showing the counter-currents above the current and inside the lock.]

Figure 9: Close-up on the opening of the door for an experiment made with silica sand (𝑑 ∼ 120 µm, 𝑢s = 0.74 cm s⁻¹) and 𝜙 ∼ 8 %.

For these runs, the effective reservoir volume therefore becomes much larger than the initial one, causing the resulting currents to fill the entire length of the tank. The corresponding maximum current volumes are therefore constant, corresponding approximately to ℎh𝐿1, and so are the corresponding entrainment coefficients. Note that the current head, which controls the current dynamics during the slumping regime, has the appropriate initial volume fraction, since it forms before the second mechanism occurs.

Appendix B. Grain properties

B.1.
Grain size distributions

The particle size distributions are obtained by taking pictures of the grains with a microscope. The resulting images are segmented using the CellPose algorithm (Pachitariu & Stringer 2022), leading to a collection of planar shapes for each particle type. For each shape, three different diameters are calculated: an average diameter assuming a circular shape, and the major and minor axes of the ellipse that has the same normalized second central moments as the selected shape.

The resulting distributions are shown in figure 10. For the glass beads, all three diameters exhibit similar distributions with matching modes. For the silica sand, the average diameter lies between the minor and major axes of the corresponding ellipse. Note that the measurements for the Silibeads 200–300 µm are lacking, due to problems with the microscope. However, for the other glass beads, the measured distributions are in fairly good agreement with the range given by the manufacturer. Therefore, for the Silibeads 200–300 µm, we take 𝑑 = 250 µm.

[Figure 10 here: grain size distributions (counts versus diameter), with modal average diameters 𝑑 = 64 ± 13 µm, 116 ± 19 µm, 135 ± 32 µm and 187 ± 49 µm for panels (a–d), respectively.]

Figure 10: Grain size distributions for the particles used in the paper: (a) Silibeads 40–70 µm, (b) sand 120 µm, (c) Silibeads 100–200 µm, (d) Silibeads 150–250 µm. The plain lines are fits of log-normal distributions, and the modal value of the average diameter distribution is shown at the upper right of each subplot.

B.2. Settling velocity

The particle settling velocity is calculated from the equilibrium between buoyancy,

𝑓g = (1/6) 𝜋 (𝜌p − 𝜌f) 𝑔 𝑑³, (B 1)

and the drag force,

𝑓d = (1/8) 𝜌f 𝑢s² 𝜋 𝑑² 𝐶D, (B 2)

where 𝐶D is the drag coefficient, a function of the particle Reynolds number Rp = 𝑢s𝑑/𝜈 and therefore of the settling velocity. Various forms of the drag coefficient can be found in the literature (van der Hoef et al. 2008). Here, we follow the approach of Camenen (2007) by writing the drag coefficient in the form

𝐶D = [(𝐴/Rp)^(1/𝑚) + 𝐵^(1/𝑚)]^𝑚, (B 3)

where 𝐴 and 𝐵 are two constants that depend on the particle shape. Balancing the two forces then leads to the following expression for the settling velocity:

𝑢s = (𝜈/𝑑) [√((1/4)(𝐴/𝐵)^(2/𝑚) + ((4/3)𝑑∗³/𝐵)^(1/𝑚)) − (1/2)(𝐴/𝐵)^(1/𝑚)]^𝑚, (B 4)

where 𝑑∗ = ((𝑠 − 1)𝑔/𝜈²)^(1/3) 𝑑 is a dimensionless particle diameter and 𝑠 = 𝜌p/𝜌f. Following the empirical calibration by Camenen (2007), we use 𝐴 = 24, 𝐵 = 0.4 and 𝑚 = 1.92, which corresponds to spherical particles.

To check the calculated settling velocities, we use a simple experimental set-up in which we put the particles in suspension in a fluid column by stirring strongly, and then follow the front of the suspension as the particles settle. As shown in figure 11, the calculated settling velocities match the experimental ones for dilute enough volume fractions. However, the measured settling velocity decreases with the volume fraction, as previously observed in the literature (Richardson & Zaki 1954). Note that the observed decrease is faster than the typical correction in (1 − 𝜙)^(1/3) proposed by Richardson & Zaki (1954), especially at low volume fractions. According to Di Felice (1995), the Richardson & Zaki regime is only reached for volume fractions larger than 10 %. For more dilute suspensions, the decrease of the settling velocity with 𝜙 is stronger. Thus, we leave out this complex dependence on the particle volume fraction and restrict ourselves to the settling velocities calculated using (B 4).

[Figure 11 here: non-dimensional settling velocity v/𝑢s versus volume fraction 𝜙 for all particle types, compared with a (1 − 𝜙)³ curve.]

Figure 11: Non-dimensional measured particle velocity as a function of the particle volume fraction. Note that the error bars essentially come from the uncertainty in the calculation of 𝑢s from (B 4), inherited from the parameter uncertainties (grain size, water viscosity, densities).

REFERENCES

Baines, Peter G 2001 Mixing in flows down gentle slopes into stratified environments. Journal of Fluid Mechanics 443, 237–270.
Balasubramanian, Sridhar & Zhong, Qiang 2018 Entrainment and mixing in lock-exchange gravity currents using simultaneous velocity-density measurements. Physics of Fluids 30 (5), 056601.
Barr, D.I.H. 1967 Densimetric exchange flow in rectangular channels. La Houille Blanche (6), 619–632.
Beghin, P., Hopfinger, E. J. & Britter, R. E. 1981 Gravitational convection from instantaneous sources on inclined boundaries. Journal of Fluid Mechanics 107, 407–422.
Benjamin, T. Brooke 1968 Gravity currents and related phenomena. Journal of Fluid Mechanics 31 (2), 209–248.
Birman, V. K., Battandier, B. A., Meiburg, E. & Linden, P. F. 2007 Lock-exchange flows in sloping channels. Journal of Fluid Mechanics 577, 53–77.
Bonnecaze, Roger T, Hallworth, Mark A, Huppert, Herbert E & Lister, John R 1995 Axisymmetric particle-driven gravity currents. Journal of Fluid Mechanics 294, 93–121.
Bonnecaze, Roger T, Huppert, Herbert E & Lister, John R 1993 Particle-driven gravity currents. Journal of Fluid Mechanics 250, 339–369.
Bonnecaze, Roger T & Lister, John R 1999 Particle-driven gravity currents down planar slopes. Journal of Fluid Mechanics 390, 75–91.
Borden, Zachary & Meiburg, Eckart 2013 Circulation based models for Boussinesq gravity currents. Physics of Fluids 25 (10), 101301.
Britter, R. E. & Linden, P. F. 1980 The motion of the front of a gravity current travelling down an incline. Journal of Fluid Mechanics 99 (3), 531–543.
Camenen, Benoît 2007 Simple and general formula for the settling velocity of particles. Journal of Hydraulic Engineering 133 (2), 229–233.
Cantero, Mariano I., Lee, J. R., Balachandar, S. & Garcia, Marcelo H. 2007 On the front velocity of gravity currents. Journal of Fluid Mechanics 586.
Cantero, Mariano I., Shringarpure, Mrugesh & Balachandar, S. 2012 Towards a universal criteria for turbulence suppression in dilute turbidity currents with non-cohesive sediments. Geophysical Research Letters 39 (14), 1–5.
Carter, Lionel, Gavey, Rachel, Talling, Peter J & Liu, James T 2014 Insights into submarine geohazards from breaks in subsea telecommunication cables. Oceanography 27 (2), 58–67.
Cenedese, Claudia & Adduce, Claudia 2008 Mixing in a density-driven current flowing down a slope in a rotating fluid. Journal of Fluid Mechanics 604, 369–388.
Chowdhury, MR & Testik, FY 2011 Laboratory testing of mathematical models for high-concentration fluid mud turbidity currents. Ocean Engineering 38 (1), 256–270.
Clare, Michael, Lintern, D Gwyn, Rosenberger, Kurt, Hughes Clarke, John E, Paull, Charles, Gwiazda, Roberto, Cartigny, Matthieu JB, Talling, Peter J, Perara, Daniel, Xu, Jingping & others 2020 Lessons learned from the monitoring of turbidity currents and guidance for future platform designs. Geological Society, London, Special Publications 500 (1), 605–634.
Dai, Albert 2013 Experiments on gravity currents propagating on different bottom slopes. Journal of Fluid Mechanics 731, 117–141.
Dai, Albert 2014 Non-Boussinesq gravity currents propagating on different bottom slopes. Journal of Fluid Mechanics 741, 658–680.
Di Felice, Renzo 1995 Hydrodynamics of liquid fluidisation. Chemical Engineering Science 50 (8), 1213–1245.
Dobran, Flavio, Neri, Augusto & Todesco, Micol 1994 Assessing the pyroclastic flow hazard at Vesuvius. Nature 367 (6463), 551–554.
Hallworth, Mark A, Hogg, Andrew J & Huppert, Herbert E 1998 Effects of external flow on compositional and particle gravity currents. Journal of Fluid Mechanics 359, 109–142.
Harris, Charles R, Millman, K Jarrod, van der Walt, Stéfan J, Gommers, Ralf, Virtanen, Pauli, Cournapeau, David, Wieser, Eric, Taylor, Julian, Berg, Sebastian, Smith, Nathaniel J & others 2020 Array programming with NumPy. Nature 585, 357–362.
Harris, Thomas C, Hogg, Andrew J & Huppert, Herbert E 2001 A mathematical framework for the analysis of particle-driven gravity currents. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 457 (2009), 1241–1272.
van der Hoef, Martin Anton, van Sint Annaland, M, Deen, NG & Kuipers, JAM 2008 Numerical simulation of dense gas-solid fluidized beds: a multiscale modeling strategy. Annual Review of Fluid Mechanics 40, 47–70.
Hogg, Andrew J, Ungarish, Marius & Huppert, Herbert E 2000 Particle-driven gravity currents: asymptotic and box model solutions. European Journal of Mechanics-B/Fluids 19 (1), 139–165.
Hogg, Andrew J. & Woods, Andrew W. 2001 The transition from inertia- to bottom-drag-dominated motion of turbulent gravity currents. Journal of Fluid Mechanics 449, 201–224.
Hunter, J. D. 2007 Matplotlib: a 2D graphics environment. Computing in Science & Engineering 9, 90–95.
Huppert, Herbert E 1998 Quantitative modelling of granular suspension flows. Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 356 (1747), 2471–2496.
Huppert, Herbert E & Simpson, John E 1980 The slumping of gravity currents. Journal of Fluid Mechanics 99 (4), 785–799.
Ikeda, Jin & Testik, Firat Y 2021 Propagation, deposition, and suspension characteristics of constant-volume particle-driven gravity currents. Environmental Fluid Mechanics 21 (1), 177–208.
Islam, M Ashraful & Imran, Jasim 2010 Vertical structure of continuous release saline and turbidity currents. Journal of Geophysical Research: Oceans 115 (C8).
Jacobson, MR & Testik, FY 2014 Turbulent entrainment into fluid mud gravity currents. Environmental Fluid Mechanics 14 (2), 541–563.
Keulegan, GH 1957 An experimental study of the motion of saline water from locks into fresh water channels. Nat. Bur. Stand. Rept. Technical Report 5168.
Khodkar, MA, Nasr-Azadani, MM & Meiburg, E 2017 Partial-depth lock-release flows. Physical Review Fluids 2 (6), 064802.
Konopliv, NA, Smith, Stefan G Llewellyn, McElwaine, JN & Meiburg, E 2016 Modelling gravity currents without an energy closure. Journal of Fluid Mechanics 789, 806–829.
Lippert, Martin C & Woods, Andrew W 2020 Experiments on the sedimentation front in steady particle-driven gravity currents. Journal of Fluid Mechanics 889.
Longo, S., Ungarish, M., Di Federico, V., Chiapponi, L. & Petrolo, D. 2018 Gravity currents produced by lock-release: theory and experiments concerning the effect of a free top in non-Boussinesq systems. Advances in Water Resources 121, 456–471.
Lowe, Ryan J, Rottman, James W & Linden, PF 2005 The non-Boussinesq lock-exchange problem. Part 1. Theory and experiments. Journal of Fluid Mechanics 537, 101–124.
Maggi, Maria Rita, Adduce, Claudia & Negretti, Maria Eletta 2022 Lock-release gravity currents propagating over roughness elements. Environmental Fluid Mechanics, 1–20.
Mangeney, A, Roche, Olivier, Hungr, O, Mangold, N, Faccanoni, Gloria & Lucas, A 2010 Erosion and mobility in granular collapse over sloping beds.
Journal of Geophysical Research: Earth Surface 115 (F3).
Marino, BM, Thomas, LP & Linden, PF 2005 The front condition for gravity currents. Journal of Fluid Mechanics 536, 49–78.
Martin, Nathan, Ionescu, IR, Mangeney, Anne, Bouchut, François & Farin, Maxime 2017 Continuum viscoplastic simulation of a granular column collapse on large slopes: μ(I) rheology and lateral wall effects. Physics of Fluids 29 (1), 013301.
Maxworthy, T. & Nokes, R. I. 2007 Experiments on gravity currents propagating down slopes. Part 1. The release of a fixed volume of heavy fluid from an enclosed lock into an open channel. Journal of Fluid Mechanics 584, 433–453.
Meiburg, Eckart, Blanchette, F., Strauss, M., Kneller, B., Glinsky, M. E., Necker, F., Härtel, C. & Kleiser, L. 2005 High resolution simulations of particle-driven gravity currents. American Society of Mechanical Engineers, Fluids Engineering Division (Publication) FED 261, 381–390.
Nogueira, Helena IS, Adduce, Claudia, Alves, Elsa & Franca, Mário J 2014 Dynamics of the head of gravity currents. Environmental Fluid Mechanics 14 (2), 519–540.
Ottolenghi, Luisa, Adduce, Claudia, Inghilesi, Roberto, Armenio, Vincenzo & Roman, Federico 2016 Entrainment and mixing in unsteady gravity currents. Journal of Hydraulic Research 54 (5), 541–557.
Pachitariu, Marius & Stringer, Carsen 2022 Cellpose 2.0: how to train your own model. Nature Methods 19 (12), 1634–1641.
Rastello, M, Ancey, C, Ousset, F, Magnard, R & Hopfinger, E J 2002 An experimental study of particle-driven gravity currents on steep slopes with entrainment of particles. Natural Hazards and Earth System Sciences 2 (3-4), 181–185.
Richardson, JF & Zaki, WN 1954 The sedimentation of a suspension of uniform spheres under conditions of viscous flow. Chemical Engineering Science 3 (2), 65–73.
Rottman, James W & Simpson, John E 1983 Gravity currents produced by instantaneous releases of a heavy fluid in a rectangular channel.
Journal of Fluid Mechanics 135, 95–110.
Sher, Diana & Woods, Andrew W 2015 Gravity currents: entrainment, stratification and self-similarity. Journal of Fluid Mechanics 784, 130–162.
Shin, JO, Dalziel, SB & Linden, PF 2004 Gravity currents produced by lock exchange. Journal of Fluid Mechanics 521, 1–34.
Simpson, JE & Britter, RE 1980 Experiments on the dynamics of the front of a gravity current. Journal of Fluid Mechanics 88, 223–240.
Simpson, John E 1972 Effects of the lower boundary on the head of a gravity current. Journal of Fluid Mechanics 53 (4), 759–768.
Simpson, John E 1999 Gravity currents: in the environment and the laboratory. Cambridge University Press.
Stagnaro, M & Bolla Pittaluga, Michele 2014 Velocity and concentration profiles of saline and turbidity currents flowing in a straight channel under quasi-uniform conditions. Earth Surface Dynamics 2 (1), 167–180.
Steenhauer, K., Tokyay, T. & Constantinescu, G. 2017 Dynamics and structure of planar gravity currents propagating down an inclined surface. Physics of Fluids 29 (3), 036604.
Stethem, Chris, Jamieson, Bruce, Schaerer, Peter, Liverman, David, Germain, Daniel & Walker, Simon 2003 Snow avalanche hazard in Canada: a review. Natural Hazards 28 (2), 487–515.
Séon, T., Hulin, J.-P., Salin, D., Perrin, B. & Hinch, E. J. 2005 Buoyancy driven miscible front dynamics in tilted tubes. Physics of Fluids 17 (3), 031702.
Ungarish, Marius 2007 A shallow-water model for high-Reynolds-number gravity currents for a wide range of density differences and fractional depths. Journal of Fluid Mechanics 579, 373–382.
Ungarish, Marius 2009 An introduction to gravity currents and intrusions. Chapman and Hall/CRC.
Ungarish, Marius 2011 Two-layer shallow-water dam-break solutions for non-Boussinesq gravity currents in a wide range of fractional depth. Journal of Fluid Mechanics 675, 27–59.
Ungarish, M & Zemach, T 2005 On the slumping of high Reynolds number gravity currents in two-dimensional and axisymmetric configurations. European Journal of Mechanics-B/Fluids 24 (1), 71–90.
Virtanen, Pauli, Gommers, Ralf, Oliphant, Travis E, Haberland, Matt, Reddy, Tyler, Cournapeau, David, Burovski, Evgeni, Peterson, Pearu, Weckesser, Warren, Bright, Jonathan & others 2020 SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17, 261–272.
Von Kármán, Theodore 1940 The engineer grapples with nonlinear problems. Bulletin of the American Mathematical Society 46 (8), 615–683.
Wilson, Richard I, Friedrich, Heide & Stevens, Craig 2017 Turbulent entrainment in sediment-laden flows interacting with an obstacle. Physics of Fluids 29 (3), 036603.

Under consideration for publication in J. Fluid Mech.

Slumping regime in lock-release turbidity currents

Cyril Gadal†, M. Mercier and L.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' Lacaze1 1Institut de Mécanique des Fluides de Toulouse (IMFT), Université de Toulouse, CNRS, Toulouse, France (Received xx;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' revised xx;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' accepted xx) Most gravitational currents occur on sloping topographies, often in the presence of particles that can settle during the current propagation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' Yet, an exhaustive exploration of associated parameters in experimental devices is still lacking.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' Here, we present an extensive experimental investigation on the slumping regime of turbidity (particle-laden) currents in two lock- release (dam-break) systems with inclined bottoms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' We identify 3 regimes controlled by the ratio between settling and current inertia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' (i) For negligible settling, the turbidity current morphodynamics correspond to those of saline homogeneous gravity currents, in terms of velocity, slumping (constant-velocity) regime duration and current morphology.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' (ii) For intermediate settling, the slumping regime duration decreases to become fully controlled by a particle settling characteristic time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' (iii) When settling overcomes the current initial inertia, the slumping (constant-velocity) regime is not detected anymore.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' In the first two regimes, the current velocity increases with the bottom slope, of about 35 % between 0◦ and 15◦.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' Finally, our experiments show that the current propagates during the slumping regime with the same shape in the frame of the moving front.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' Strikingly, the current head (first 10 centimeters behind the nose) is found to be independent of all experimental parameters covered in the present study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' We also quantify water entrainment coefficients 𝐸, and compare them with previous literature, hence finding 𝐸 ∝ R𝑒.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/itAyT4oBgHgl3EQfX_cd/content/2301.00192v1.pdf'} +page_content=' Key words: gravity currents, particle/fluid flow 1.' 
1. Introduction

Turbidity currents are gravity-driven flows induced by the presence of suspended particles, in addition to other processes that may affect the density, such as temperature, salinity or humidity. They occur ubiquitously in nature, from submarine turbidites to powder-snow avalanches and volcanic pyroclastic flows, and are almost always sources of potential natural hazards (e.g. Dobran et al. 1994; Stethem et al. 2003; Carter et al. 2014; Clare et al. 2020).
These currents have been studied extensively, along with purely density-driven gravity currents, for almost a century, by means of experiments (e.g. Simpson & Britter 1980; Rastello et al. 2002; Dai 2014; Lippert & Woods 2020), theoretical analyses (e.g. Benjamin 1968; Huppert 1998; Hogg & Woods 2001; Ungarish 2009) and numerical simulations (e.g. Meiburg et al. 2005; Cantero et al. 2007, 2012; Ottolenghi et al. 2016). Among these studies, a major source of interest has been the prediction of the front velocity 𝑢c of the current.

† Email address for correspondence: cyril.gadal@imft.fr

arXiv:2301.00192v1 [physics.flu-dyn] 31 Dec 2022
Dimensionally, a current of height ℎ and density difference Δ𝜌 with respect to the ambient density 𝜌 has a front velocity scaling as

𝑢c ∝ √(𝑔′ℎ),   (1.1)

where 𝑔′ = 𝑔(Δ𝜌/𝜌). Many works have been devoted to capturing the exact value of the proportionality factor. The pioneering work of von Kármán (1940) leads to √2 in the case of steady unbounded flows, later extended to account for finite flow depth (Benjamin 1968; Rottman & Simpson 1983; Ungarish & Zemach 2005), energy conservation/dissipation (Shin et al. 2004; Borden & Meiburg 2013) or non-Boussinesq density differences (Ungarish 2007, 2011; Konopliv et al. 2016).
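As a worked example of the scaling (1.1), the sketch below evaluates the buoyancy velocity for purely illustrative values of the current height and density difference (not data from this study or any cited work), and includes von Kármán's √2 prefactor for steady unbounded flows:

```python
import math

# Illustrative values only (assumptions, not experimental data)
g = 9.81       # gravitational acceleration [m s^-2]
rho = 1000.0   # ambient density [kg m^-3]
drho = 20.0    # density difference with the ambient [kg m^-3]
h = 0.2        # current height [m]

g_prime = g * drho / rho          # reduced gravity g' = g (Δρ/ρ)
u_scale = math.sqrt(g_prime * h)  # buoyancy velocity scale of eq. (1.1)

# Von Kármán (1940): u_c = sqrt(2) * sqrt(g' h) for steady unbounded flows
u_c_vonkarman = math.sqrt(2.0) * u_scale
print(f"g' = {g_prime:.4f} m/s^2, sqrt(g'h) = {u_scale:.3f} m/s")
```

The later corrections cited above (finite depth, energy budget, non-Boussinesq effects) only change the prefactor, not the √(𝑔′ℎ) dependence.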
Gravity currents can be generated by a constant source of buoyancy (e.g. Britter & Linden 1980; Baines 2001; Cenedese & Adduce 2008; Lippert & Woods 2020), or can result from the instantaneous release of a limited volume of buoyant fluid. In the latter case, dam-break (or lock-exchange) systems are a common set-up to study the features of the resulting currents (e.g. Simpson 1972; Huppert & Simpson 1980; Rottman & Simpson 1983; Bonnecaze et al. 1993; Ungarish & Zemach 2005; Ungarish 2007, 2011; Chowdhury & Testik 2011; Khodkar et al. 2017; Balasubramanian & Zhong 2018; Maggi et al. 2022).
The heavier (or lighter) fluid is kept separated from the ambient by a locked gate, which is suddenly opened to generate the current. For high-Reynolds-number flows, the front velocity of the resulting currents can evolve through different stages (Huppert & Simpson 1980). First comes a regime of constant velocity, called the slumping regime, during which the current gains inertia from the collapse of the suspended column; it lasts about 5–15 lock lengths depending on the geometry (Rottman & Simpson 1983; Ungarish & Zemach 2005; Ungarish 2009). If inertia dominates the flow, the rarefaction wave receding during the column slumping hits the back wall and reflects towards the current nose, modifying its velocity into a so-called inertial regime, in which the front position evolves as 𝑡^(2/3).
The current eventually enters regimes dominated by either viscosity (Huppert & Simpson 1980), friction and entrainment (Bonnecaze & Lister 1999; Hogg & Woods 2001) or particle settling (Bonnecaze et al. 1993, 1995; Hallworth et al. 1998; Huppert 1998; Hogg et al. 2000; Harris et al. 2001). In lock-release systems, these previous studies have focused on the impact of the settling velocity 𝑢s on the long-term dynamics of the current, typically during the inertial regime and after.
This corresponds to low values of the Rouse number, which characterizes the ability of a fluid flow to delay the gravitational settling of the particles and is defined here as P = 𝑢s/𝑢c; it is typically smaller than 10⁻² in the experiments of Bonnecaze et al. (1993). Likewise, theoretical studies have been restricted to asymptotically small P values, allowing for analytical developments, in the shallow-water limit in particular (Hogg et al. 2000; Harris et al. 2001). Recently, Ikeda & Testik (2021) noted increasing deviations of particle-laden currents from saline currents in all propagation regimes as the Rouse number increases.
Note that the literature on constant-inflow turbidity currents has also quantified significant discrepancies, especially concerning the volume occupied by the current (Bonnecaze & Lister 1999; Lippert & Woods 2020). Lock-release homogeneous and turbidity gravity currents on an inclined plane, which induces an extra driving force due to the weight of the current, have also been studied (Beghin et al. 1981; Rastello et al. 2002; Séon et al. 2005; Birman et al. 2007; Maxworthy & Nokes 2007; Dai 2013, 2014; Steenhauer et al. 2017). In this configuration, most studies reported that the current dynamics starts with an acceleration phase, followed by a deceleration phase corresponding to the inertial regime, in which the dilution of the current by water entrainment becomes increasingly important on large slopes. However, to the authors' knowledge, only the studies of Maxworthy & Nokes (2007) and Birman et al. (2007) have quantified the impact of the bottom slope on the initial velocity in the slumping regime, for which the velocity is found to increase with the slope, roughly linearly in the range [0°, 20°].
In this work, we present lock-release experiments of turbidity currents in which we systematically vary the initial volume fraction, the bottom slope and the particle diameter (and thus the settling velocity), extending previous works to a larger range of these control parameters in a single experimental device. We focus in particular on the slumping regime, for which we map its existence and quantify its duration, as well as the related current morphodynamics (velocity, shape) and water entrainment. Throughout the paper, we also aim at rationalizing existing results together with those obtained in the present study into a relevant parameter map characterizing the flow regimes.

2. Methods

2.1. Experimental set-ups

In this study, most of the experiments are performed using the dam-break experimental set-up sketched in figure 1(a), later referred to as set-up 1.
The tank, 150 cm long (𝐿0 + 𝐿1) and 20 cm wide (𝑊0), is filled with water. It is divided in two parts by a sluice gate located 10 cm (𝐿0) from the left end of the tank. This forms a reservoir on the left side of the tank, in which we prepare an initial volume of particle suspension 𝑉0 ≃ 3.9 L by vigorously stirring a known mass of particles 𝑚0 in water. Finally, the tank can be inclined at angles up to 7°, and we keep the water height at the gate position constant, equal to 20 cm. The resulting variation of the initial volume 𝑉0 is accounted for, but remains small compared to the experimental uncertainties. At the beginning of each experiment, the stirring is stopped and the sluice gate is opened almost entirely, up to ≈ 1 cm below the water surface, to limit as much as possible the generation of surface waves.
The slumping of the column and the resulting turbidity current are recorded with a camera, using a backlight as the light source (see figure 1(c–h)). To further explore the influence of the bottom inclination, another experimental set-up is used (set-up 2, see figure 1(b)). Here, the tank can be inclined further thanks to a rigid lid covering the water surface, which keeps the water height at 50 cm; 𝐿0 = 10 cm, 𝐿1 = 320 cm and 𝑊0 = 10 cm. Note that in this set-up the suspension did not fill the entire reservoir height, but around 50 %–75 % of it. Nevertheless, the suspension was checked to be homogeneously mixed up to its maximum height, and the associated initial volume of suspension 𝑉0 was extracted from images prior to opening the gate. Finally, in this set-up the current is illuminated from the top, rather than with backlighting.

2.2. Parameter space and relevant non-dimensional quantities

Most experiments are performed with silica sand grains of diameter 𝑑 ≃ 120 μm. For these particles, the tank inclination 𝜃 is varied from 0° to 7° in set-up 1, and from 7° to 15° in set-up 2. Then, in set-up 1 and for 𝜃 = 7°, the particle settling velocity 𝑢s is varied by using glass beads (©Silibeads) of mean diameter ranging from 60 μm to 250 μm, corresponding to 𝑢s ∈ [0.3, 3.2] cm s⁻¹. The particle properties are detailed in appendix B. For all bottom slopes and settling velocities, the initial volume fraction is varied in the range 𝜙 ∈ [0.25, 30] %.
The corresponding excess density of the fluid/particle suspension with respect to the ambient water, Δ𝜌 = 𝜌0 − 𝜌f (where 𝜌0 = 𝜌f + (𝜌p − 𝜌f)𝜙), therefore varied between 3 kg m⁻³ and 600 kg m⁻³. Finally, we also performed experiments with homogeneous saline (particle-free) gravity currents in set-up 1, in order to make a direct comparison between turbidity currents and homogeneous currents. For that purpose, the density in the reservoir was varied from 1002 kg m⁻³ to 1250 kg m⁻³ to explore the same range of Δ𝜌 values. This resulted in a total of 169 experimental runs.

Figure 1: (a, b) Sketches of the two experimental set-ups. (c–h) Snapshots of experiments using the silica sand (𝑑 ∼ 120 μm, 𝑢s = 0.74 cm s⁻¹), 𝜃 = 7.2°, for initial volume fractions of (c–e) 𝜙 = 0.87 % and (f–h) 𝜙 = 6.4 %. The orange lines show the extracted current contours.

Each experiment is characterized by three initial quantities: the bottom slope 𝜃, the volume fraction 𝜙 (or, equivalently, the excess density Δ𝜌, as will be discussed later) and the particle settling velocity 𝑢s. For the saline homogeneous case, only the slope 𝜃 and the excess density Δ𝜌 characterize the system.
Note that the initial aspect ratio of the reservoir, 𝑎 = ℎ0/𝐿0, is kept nearly constant in each set-up, equal to 2 in set-up 1 and ≃ 0.67 in set-up 2. Its influence will be discussed throughout the paper. Following the available literature, we define a velocity scale

𝑢0 = √(𝑔′ℎ0),   (2.1)

where ℎ0 = 𝑉0/(𝐿0𝑊0) is the average initial heavy-fluid height and 𝑔′ = 𝑔Δ𝜌/𝜌f is the reduced gravity. Note that, in the case of turbidity currents, it also writes 𝑔′ = 𝑔(𝜌p − 𝜌f)𝜙/𝜌f, where 𝜌p and 𝜌f are the particle and fluid densities. This velocity scale can be used to define the Reynolds number R𝑒 and the Rouse number P as the control dimensionless parameters based on the initial conditions:

R𝑒 = 𝑢0ℎ0/𝜈,   P = 𝑢s/𝑢0,   (2.2)

where 𝜈 is the water kinematic viscosity.
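As a numerical illustration of (2.1)–(2.2), the sketch below evaluates Δ𝜌, 𝑢0, R𝑒 and P for one assumed operating point of set-up 1. The particle density and volume fraction are illustrative assumptions (the study's exact particle properties are in its appendix B), not measurements:

```python
import math

# Illustrative inputs, assumed within the ranges quoted in the text
g = 9.81             # gravitational acceleration [m s^-2]
rho_p = 2650.0       # particle density, silica-like assumption [kg m^-3]
rho_f = 1000.0       # ambient water density [kg m^-3]
phi = 0.03           # initial volume fraction (3 %), assumed
V0 = 3.9e-3          # initial suspension volume of set-up 1 [m^3]
L0, W0 = 0.10, 0.20  # lock length and tank width of set-up 1 [m]
u_s = 0.0074         # settling velocity of the silica sand (0.74 cm/s) [m s^-1]
nu = 1.0e-6          # water kinematic viscosity [m^2 s^-1]

drho = (rho_p - rho_f) * phi   # excess density Δρ = (ρp − ρf) φ
h0 = V0 / (L0 * W0)            # average initial heavy-fluid height
g_prime = g * drho / rho_f     # reduced gravity g' = g Δρ / ρf
u0 = math.sqrt(g_prime * h0)   # velocity scale, eq. (2.1)

Re = u0 * h0 / nu              # Reynolds number, eq. (2.2)
P = u_s / u0                   # Rouse number, eq. (2.2)
print(f"Δρ = {drho:.1f} kg/m³, u0 = {u0:.3f} m/s, Re = {Re:.1e}, P = {P:.3f}")
```

With these assumed values the point falls inside the ranges reported below, R𝑒 of order 10⁴–10⁵ and P of order 10⁻².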
In our experiments, we then have R𝑒 ∈ [2 × 10⁴, 4 × 10⁵] and P ∈ [0.004, 0.1]. Note that in lock-release systems, 𝑢0 is the only velocity scale associated with the gravity current, such that the initial Froude number reduces to unity for all experiments. On the other hand, we define a Froude number as the dimensionless current velocity in the slumping regime:

F𝑟 = 𝑢c/𝑢0, (2.3)

where 𝑢c is the current velocity in the slumping (constant-velocity) regime.
Figure 2: (a) Current nose position as a function of time for various initial volume fractions, for a bottom slope 𝜃 = 7◦ and 𝑢s = 0.74 cm s−1 (for clarity purposes, not all experiments are shown here). The black dashed lines are linear fits in the constant velocity regime, whose slopes give the current velocity 𝑢c. The gray dashed line indicates the end of the tank.
(b) Current velocity 𝑢c as a function of the excess density and the volume fraction, for a bottom slope 𝜃 = 7◦ and different settling velocities. (c) Current Froude number as a function of the initial Reynolds number, for two different bottom slopes and 𝑢s = 0.74 cm s−1. (d) Current Froude number averaged over the initial volume fraction, as a function of the bottom slope for 𝑢s = 0.74 cm s−1. Circles correspond to set-up 1 and empty squares to set-up 2. Orange diamonds correspond to the data of Maxworthy & Nokes (2007) for homogeneous saline currents. The black dashed lines are fits of (3.4) to the three previous datasets, leading to (F𝑟0, 𝐶) equal to (0.36, 3), (0.26, 6) and (0.42, 3), respectively.

3. Current dynamics during the slumping regime

In this section, we focus on the current dynamics and shape during the slumping regime, and explore the effect of bottom slope and particle settling velocity.

3.1. Nose position and velocity

3.1.1. Slumping behavior

First, we start by tracking the current front position, displayed as a function of time in figure 2(a) for different initial volume fractions 𝜙 (or equivalently different Δ𝜌) and 𝜃 = 7◦.
After a short acceleration stage, corresponding to the early collapse of the heavy fluid column dominated by vertical motion, all experiments exhibit a regime where the current propagates at a constant front velocity 𝑢c (dashed black lines in figure 2(a)), also known as the slumping regime. In this regime, the measured velocity comes from the lossless conversion of the initial potential energy of the heavy fluid column, Δ𝜌𝑔ℎ0, into horizontal kinetic energy, (1/2)𝜌0𝑢c², leading to:

𝑢c ∝ √((Δ𝜌/𝜌0) 𝑔ℎ0), (3.1)

where 𝜌0 is the initial heavy fluid density. The prefactor of (3.1) is notably proportional to √(𝜌0/𝜌f) (Von Kármán 1940; Benjamin 1968; Shin et al. 2004), leading to

𝑢c ∝ 𝑢0. (3.2)

As shown in figure 2(b), the current velocity indeed scales as Δ𝜌^1/2 (or equivalently 𝜙^1/2), as expected from (3.2). This also corresponds to a constant Froude number F𝑟 = 𝑢c/𝑢0, as shown in figure 2(c) (dark blue symbols for 𝜃 = 7◦). Varying the particle settling velocity while keeping the bottom slope at 7◦ impacts neither the scaling of (3.2) nor its prefactor (see figure 2(b)), which remains within 20 % of the one corresponding to homogeneous saline gravity currents (red symbols). To conclude, the slumping regime is characterized by a time-independent front velocity, as usually obtained, whose value remains, more surprisingly, nearly independent of R𝑒 and P in the range of parameters considered here.

3.1.2. Effect of the bottom slope

On the other hand, changing the bottom slope more clearly affects the current velocity. As shown in figure 2(c), currents on a slope of 7◦ are nearly 30 % faster than those on 𝜃 = 1◦. After averaging over the initial volume fraction 𝜙, for which no clear dependency is observed as explained in the previous section, we find that the Froude number ⟨F𝑟⟩ increases rather linearly with the bottom slope 𝜃 (see figure 2(d)). The slope of this linear relationship is recovered in experimental set-up 2, although the measured ⟨F𝑟⟩ are generally lower. It is also recovered in the study of Maxworthy & Nokes (2007), this time with generally higher Froude numbers. This increase of the Froude number with the bottom slope is not necessarily obvious. Indeed, the time-independent velocity in the slumping regime results from a balance between inertia and pressure gradient, which should not lead to an increase with slope.
On the other hand, the slope adds a constant forcing term, which could be associated with an accelerated flow unless friction balances this extra force. However, the full balance of these different terms hardly leads to a constant velocity. Yet, it is clearly observed here experimentally. One can then understand this slope trend by considering energy balances and assuming this velocity to be constant. During the slumping of the column, the current kinetic energy results from the initial potential energy, but also from the work of the along-slope component of gravity and of friction during this phase. This leads to the following balance:

(1/2)𝜌0𝑢c² = 𝐴Δ𝜌𝑔 cos 𝜃 ℎ0 + 𝐵Δ𝜌𝑔 sin 𝜃 𝐿 − (1/2)𝐶𝑑𝜌0𝑢c² 𝐿/ℎ0, (3.3)

assuming some inertial-scale stress, and where 𝐿 is the characteristic distance over which the current moves during this early phase.
𝐴, 𝐵 and 𝐶𝑑 are scale constants accounting for unknown and local details of the slumping behavior in the bulk. Hence,

F𝑟(𝜃) ≡ 𝑢c/√((Δ𝜌/𝜌0)𝑔ℎ0) = √(2𝐴/(1 + 𝐶𝑑𝐿/ℎ0)) √(cos 𝜃 + 𝐶 sin 𝜃) ≡ F𝑟0 √(cos 𝜃 + 𝐶 sin 𝜃), (3.4)

where 𝐶 = (𝐵/𝐴)(𝐿/ℎ0) and F𝑟0 = F𝑟(𝜃 = 0). For small slopes (𝜃 → 0◦), (3.4) can be approximated by a linear relationship in 𝜃, as described before. The fits of (3.4) to the data of set-up 1, set-up 2, and Maxworthy & Nokes (2007) are shown in figure 2(d). Although (3.4) is able to represent the data well, the fits are poorly constrained, due to the small number of experimental points or to the large dispersion in the dataset of Maxworthy & Nokes (2007).
This is especially true for the parameter 𝐶, whose uncertainty is about 100 %, although it is found to be of the same order of magnitude in all datasets. Note that this order of magnitude implies that the linearized version of (3.4) could only be used up to 𝜃 ∼ 1◦, hence justifying here the use of the non-linearized form of (3.4). Dedicated experiments would be required to study in further detail the slumping of the column and its dependence on the geometry of the experiments, by changing the lock aspect ratio 𝑎 for instance. The resulting values of the Froude number for 𝜃 = 0◦, which are much better constrained, highlight the general discrepancies between the different datasets, which could come from geometrical differences between the corresponding experimental set-ups. In the experimental set-up of Maxworthy & Nokes (2007), the heavy fluid is released well below the water surface (partial-depth release), whereas only full-depth releases were performed in set-up 1.
According to the predictions of Ungarish & Zemach (2005), this can lead to an increase of the velocity of almost 50 %, which matches well the discrepancy between the two studies. In set-up 2, the ratio between typical current heights and the tank width is about 1.5, compared to 0.5 in set-up 1. We therefore expect the energy dissipation induced by friction at the walls of the tank to be much larger, hence explaining the lower measured Froude numbers. In any case, the measured values are smaller than the F𝑟 = 0.5 predicted for a non-inclined tank by the simple steady model of Benjamin (1968). The same holds for shallow-water models including the properties of dam-break configurations (Ungarish & Zemach 2005); also including non-Boussinesq effects (Ungarish 2007) or the motion of the light fluid in the upper layer (Ungarish 2011) leads to even larger predicted Froude numbers.
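As a sanity check on this functional form, the two parameters of (3.4) can be recovered by an ordinary least-squares fit after squaring the relation. The sketch below is an illustration on synthetic, assumed values (not the experimental data): since F𝑟² = F𝑟0²(cos 𝜃 + 𝐶 sin 𝜃) is linear in (F𝑟0², F𝑟0²𝐶), a linear fit on the regressors cos 𝜃 and sin 𝜃 suffices.

```python
import numpy as np

# Sketch of the fitting procedure behind figure 2(d), on synthetic data
# (the slope values and true parameters below are assumptions).
fr0_true, c_true = 0.36, 3.0
theta = np.radians(np.array([0.0, 1.0, 3.0, 7.0, 15.0]))  # bottom slopes
fr = fr0_true * np.sqrt(np.cos(theta) + c_true * np.sin(theta))

# Fr^2 = Fr0^2 * cos(theta) + (Fr0^2 * C) * sin(theta): linear least squares.
A = np.column_stack([np.cos(theta), np.sin(theta)])
coef, *_ = np.linalg.lstsq(A, fr**2, rcond=None)
fr0_fit = float(np.sqrt(coef[0]))
c_fit = float(coef[1] / coef[0])
```

On noiseless data the fit recovers (F𝑟0, 𝐶) exactly; with only a handful of slope values and experimental scatter, the constraint on 𝐶 degrades quickly, consistent with the large uncertainties quoted above.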
To conclude, the conceptual model (3.4) allows us to recover the 𝜃-dependency obtained in the present configuration, extending available results from the literature, mostly focused on zero-slope configurations, to predict the front velocity of the current. Clearly, such an approach does not allow us to discriminate the influence of particle settling compared to the situation of a homogeneous saline current. This will be discussed in the following section.

3.2. Existence and duration of the slumping regime

The previous section discussed the value of the current velocity during the constant velocity (slumping) regime. However, the latter could not be detected in every experiment.
In the following, we discuss its duration and existence with respect to the settling velocity 𝑢s and the excess density Δ𝜌, while keeping the bottom slope at 7◦.

3.2.1. Existence of the constant velocity regime

Figure 3(a) shows the influence of the settling velocity on the nose propagation of currents at a fixed Δ𝜌 = 45 kg m−3 (𝜙 = 3 %). As discussed in section 3.1, all curves exhibit the same initial constant velocity at a given slope 𝜃, with the exception of the largest settling velocity, for which no clear constant velocity regime can be observed. For all our experiments, we classify the cases with (blue dots) or without (orange squares) a constant velocity regime in figure 3(b), in a (𝑢s, 𝑢0) diagram. The cases shown in figure 3(a) are indicated by a horizontal green rectangle.
It highlights that the loss of a constant velocity regime occurs when particle settling overcomes the current inertia, with a transition occurring approximately at a Rouse number P = 𝑢s/𝑢0 ∼ 0.067 (black dashed line in figure 3(b)).

3.2.2. Duration of the constant velocity regime

The duration of the constant velocity regime, i.e. the time 𝑡end at which the front evolution is observed to deviate from a linear trend with 𝑡, is shown to decrease as the settling velocity increases (see figure 3(a)).

Figure 3: (a) Current nose position as a function of time for various particle settling velocities, at a fixed volume fraction 𝜙 = 3 %. (b) Regime diagram indicating currents for which the constant velocity regime is detected (blue dots) and those where it is not (orange squares); see section 3.2.1 for more details. The black dashed line indicates a possible linear regime separation, corresponding to P = 0.067. (c) Ending time of the constant velocity regime as a function of the current characteristic time scale, for various particle settling velocities. (d) Same as (c), but with both axes rescaled by the settling time scale 𝑡s = ℎ0/𝑢s. In both subplots, the black dashed line indicates 𝑡end = 28.5𝐿0/𝑢0. On (d), the vertical dotted line indicates the limit P/𝑎 ≃ 0.033 (see figure 3(b) and section 3.2), and the horizontal dash-dotted line the limit 𝑡end = 0.5𝑡s. In this figure, experiments are from set-up 1 (𝑎 = 2) with 𝜃 = 7◦.

In the case of homogeneous saline gravity currents, this duration is about 𝑡end ≃ 30𝐿0/𝑢0, as shown in figure 3(c). The latter result is in agreement with previous experiments and shallow-water modeling, and corresponds to the duration needed for the bore (the current nose of the upper light fluid layer) to reach the nose of the heavy fluid current (Rottman & Simpson 1983; Ungarish & Zemach 2005).
Note that previous studies have reported prefactors of 𝑡end ∝ 𝐿0/𝑢0 between 20 and 30, which corresponds to a travel distance of 7 to 12 reservoir lengths (Rottman & Simpson 1983; Ungarish & Zemach 2005; Chowdhury & Testik 2011; Nogueira et al. 2014; Sher & Woods 2015; Ottolenghi et al. 2016). The difference may essentially result from the difficulty in measuring 𝑡end (Rottman & Simpson 1983; Ungarish 2009; Ottolenghi et al. 2016). For the smallest glass beads (𝑢s = 0.32 cm s−1), figure 3(c) shows that they behave similarly to the saline gravity currents, except for slow currents (low volume fractions, high 𝐿0/𝑢0), which exhibit smaller 𝑡end. As the settling velocity increases, an increasing number of cases do not follow this trend, more likely for large values of 𝐿0/𝑢0, until all currents exhibit smaller 𝑡end values. By using 𝑡s = ℎ0/𝑢s as a characteristic settling time, which corresponds to the time required for a particle to settle over the initial column height, we obtain a good collapse of the data at various settling velocities (see figure 3(d)). The resulting trend, whose horizontal axis is now controlled by the ratio between the Rouse number P and the reservoir aspect ratio 𝑎 = ℎ0/𝐿0, exhibits a transition between two regimes. For small P/𝑎, the settling is negligible and 𝑡end scales with 𝐿0/𝑢0, as for saline density currents (black dashed line).
For P/𝑎 larger than 0.01, settling and current inertia gradually become of the same order of magnitude. The curve then transitions to a regime entirely controlled by particle settling, 𝑡end ≃ 𝑡s (dash-dotted line). The trend stops at P/𝑎 ≃ 0.033, the limit above which the constant velocity regime is no longer observed. The data presented in figure 3 come from experiments performed in set-up 1, for which 𝑎 = 2 is kept constant. As such, they do not allow us to assess the relevance of 𝑎 to the variation of the slumping regime duration. However, by comparing data from set-ups 1 and 2 (different 𝑎) for the same particles (thus same 𝑢s), one can observe that a better collapse is obtained after rescaling by the settling time ℎ0/𝑢s (see appendix figure 8).
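The two regimes described above amount to a simple piecewise estimate for the slumping duration. The sketch below is illustrative only: the helper name and the lock parameters are made up, while the prefactors 28.5 and 0.5 and the cutoff P/𝑎 ≃ 0.033 are those quoted in the text and figure 3.

```python
import math

def slumping_duration(h0, L0, us, g_prime):
    """Illustrative estimate of the slumping (constant-velocity) duration.

    For small P/a settling is negligible and t_end ~ 28.5 L0/u0;
    for larger P/a settling dominates and t_end ~ 0.5 h0/us;
    beyond P/a ~ 0.033 no constant-velocity regime is observed.
    """
    u0 = math.sqrt(g_prime * h0)   # velocity scale sqrt(g' h0)
    P = us / u0                    # Rouse number
    a = h0 / L0                    # lock aspect ratio
    if P / a > 0.033:              # constant-velocity regime not observed
        return None
    # the shorter of the inertial and settling-limited durations applies
    return min(28.5 * L0 / u0, 0.5 * h0 / us)

# hypothetical lock-release parameters (SI units)
t_end = slumping_duration(h0=0.2, L0=0.1, us=0.0032, g_prime=0.3)
print(t_end)
```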
This highlights the relevance of the lock aspect ratio 𝑎 in the control of the constant-velocity regime duration. Finally, it has to be noted that no dependence of 𝑡end on the bottom slope is found in the range of parameters covered here (see appendix figure 8). The latter result is not necessarily obvious, as the current velocity in the slumping regime was shown to depend on 𝜃 in the previous section. However, slope is actually a second-order effect here, as explained in the previous section. In conclusion, the duration of the slumping regime is found to depend mainly on P/𝑎.

4. Current morphology during the slumping regime

In this section, we focus on the current morphology. During the constant velocity regime, the current shape is found to be defined by an average shape in the frame of the current nose (blue and orange curves in figure 4(a, b)).
Fluctuations around this average profile can be quantified by the standard deviation, as shown in figure 4(d).

4.1. Shape morphometrics

The quantitative characterization of the current shape has always been a challenge in the literature, aiming, for example, at the extraction of a current characteristic height. When velocity or density/concentration profiles are accessible, studies have used a height weighted by buoyancy (Shin et al. 2004; Marino et al. 2005; Cantero et al. 2007; Sher & Woods 2015) or kinetic energy profiles (e.g. Islam & Imran 2010; Stagnaro & Bolla Pittaluga 2014). When a single contour is available, the height of the trailing current behind the head has been widely used, provided it is well defined (e.g. Simpson & Britter 1980; Bonnecaze et al. 1993; Lowe et al. 2005; Chowdhury & Testik 2011). As shown in figure 4(c), the shape of the observed currents spans from a single head (low volume fractions) to a continuous current with no distinguishable head lobe (highest volume fractions). The same qualitative variation is observed between low and high settling velocities, or for saline homogeneous currents between low and high excess density (not shown here). In order to account for all these morphologies, we use the following approach. First, we fit the theoretical shape of a steady current calculated by Benjamin (1968), to which we add a free vertical shift to account for the nose (foremost point of the head) height induced by

[Figure 4 here; legend, volume fraction 𝜙 [%]: 0.49 ± 0.02, 0.87 ± 0.02, 1.17 ± 0.02, 2.97 ± 0.02, 6.42 ± 0.04, 15.55 ± 0.08]

Figure 4: (a) Current shape for an experiment with an initial volume fraction 𝜙 = 3 %. Blue lines: all shapes during the constant velocity regime, superimposed with transparency. Orange line: temporal average shape. Red dashed line: fit of Benjamin’s current shape. Green dashed line: fit of the logarithmic shape (4.1). (b) Zoom of (a) on the first centimetres. The gray rectangle indicates the camera pixel size. (c) Average shapes during the constant velocity regime for various initial volume fractions. (d) Standard deviation corresponding to the shapes in (c).
In (c) and (d), the black dashed line separates the current head from its body. Not all experiments are shown, for the sake of clarity. In this figure, grains are silica sand with 𝑑 ∼ 120 𝜇m, and the bottom slope is 𝜃 = 7°.

bottom friction (red dashed lines in figure 4(a, b)). This allows us to extract a current height ℎb, as well as the current nose height ℎn. While Benjamin’s shape accounts for the large-scale behaviour of the current’s head, it does not reproduce well the curvature close to the nose (dashed red line in figure 4(c)), and therefore leads to poor estimations of ℎn. However, we noticed that, close to the nose, the current head is well approximated by a portion of a logarithm:

ℎ(𝑥) = ℎh log((𝑥 + 𝛿)/𝑥c),   (4.1)

where 𝛿 is a shift parameter, found to be almost constant for all currents, and therefore fixed to 1.4 cm. Here, ℎh gives a characteristic head height representing its geometry, and ℎ(0) ≡ ℎn the nose height.

Finally, we also noticed that the average current shape can be split in two parts (figure 4(c, d)). Close to the nose, the head presents little variation during the current propagation (figure 4(d)), and is also rather invariant with respect to the volume fraction, as well as to the bottom slope and the settling velocity. On the contrary, the tail presents the largest temporal fluctuations, induced by shear instabilities, and its morphology largely depends on the volume fraction and the settling velocity (see section 4.2 for further discussion). Accordingly, the transition between head and tail is defined as a change in the standard deviation, found to increase beyond 10 cm behind the current nose (black dashed line in figure 4(d)). The volume of the current head (per unit width), 𝑉h, is then calculated over the corresponding 10 cm behind the current nose (black dashed line in figure 4(c)). The volume of the tail (per unit width), 𝑉t, is calculated as the total volume minus the head volume 𝑉h.

[Figure 5 here; legend: bottom slope 𝜃 [°] = 0.0 ± 0.2, 0.9 ± 0.2, 2.8 ± 0.2, 5.0 ± 0.2, 7.2 ± 0.2; settling velocity 𝑢s [cm/s] = Saline, 0.32 ± 0.12, 0.74 ± 0.25, 1.1 ± 0.5, 1.9 ± 0.8, 3.2 ± 1.5]

Figure 5: Average shape properties as a function of the bulk Reynolds number, for various bottom slopes and settling velocities. (a) Current height. (b) Head height. (c) Nose height. (d) Current head volume.

4.2. Results

The characteristic quantities ℎb, ℎh, ℎn, 𝑉h and 𝑉t are shown in figures 5 and 6 for all experimental runs that exhibit a constant velocity regime (see section 3.2.1).
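Before turning to the results, note that once 𝛿 is fixed, the logarithmic head profile (4.1) is linear in its parameters, so the fit reduces to ordinary least squares on log(𝑥 + 𝛿). The sketch below illustrates this on a synthetic profile; the function name and all numerical values are ours, not the authors'.

```python
import numpy as np

def fit_log_head(x, h, delta=1.4):
    """Fit h(x) = h_h * log((x + delta)/x_c) near the nose (x, h in cm).

    With delta fixed (1.4 cm in the text), the model reads
    h = h_h*log(x + delta) - h_h*log(x_c): a straight line in
    log(x + delta), so np.polyfit recovers both parameters.
    """
    A, B = np.polyfit(np.log(x + delta), h, 1)  # slope, intercept
    h_h = A
    x_c = np.exp(-B / A)
    h_n = h_h * np.log(delta / x_c)  # nose height h(0)
    return h_h, x_c, h_n

# synthetic head profile generated with the model itself
x = np.linspace(0.0, 4.0, 50)
h = 2.0 * np.log((x + 1.4) / 0.7)
h_h, x_c, h_n = fit_log_head(x, h)
print(round(h_h, 3), round(x_c, 3))  # → 2.0 0.7
```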
As a main observation, all parameters linked to the current head morphology (ℎb, ℎh, ℎn, 𝑉h) are found to be independent of all experimental parameters: excess density (i.e. R𝑒), settling velocity (i.e. P) and bottom slope. On the other hand, only the volume of the tail, 𝑉t, is found to increase with the excess density (i.e. R𝑒) and to decrease with the settling velocity (i.e. P).
Figure 6: Average current tail volume as a function of the bulk Reynolds number, for various bottom slopes and settling velocities (panel annotations: (a) 𝑉t/𝑉0 ∝ R𝑒; (b) 𝑉t/𝑉0 ∝ P−1). Intermediate slopes are not shown, to better highlight the effect of the settling velocity. In (a), biased points, typically for R𝑒 > 10⁵, represent runs for which we have little confidence in the tail volume due to the lock opening (see section 5.2 and appendix A for further discussion). They are removed in (b). The legend for the colours is the same as in figure 5.

4.2.1. Current height

As shown in figure 5(a), the average current height ℎb is ≃ 0.4 ℎ0, in agreement with previous studies (Shin et al. 2004; Sher & Woods 2015). All experimental points also lie within the predictions of Benjamin’s and single-layer shallow-water models, ℎb = 0.5 ℎ0, and two-layer shallow-water models, ℎb = 0.35 ℎ0 (Benjamin 1968; Ungarish 2007, 2011). Note that a slight decrease can be observed for large excess densities, corresponding to large volume fractions and Reynolds numbers.
This could result from non-Boussinesq effects, although shallow-water models predict them to be insignificant in the density-ratio range of our experiments (Ungarish 2007, 2011). Likewise, we obtain average head and nose heights of ℎh ≃ 0.13 ℎ0 and ℎn ≃ 0.04 ℎ0 (figure 5(b, c)). Note that here ℎn/ℎb ≃ 0.1, similar to previous measurements available in the literature for the same range of Reynolds numbers, made on homogeneous density currents (see figure 11.13 of Simpson (1999), and the corresponding measurements of Barr (1967) and Keulegan (1957)). Despite the dispersion in our data, it also seems that saline homogeneous currents, and turbidity currents with the smallest settling velocity, have in general greater heights than turbidity currents with larger settling velocities (see figure 5(a–c)).
This could result from a less dilute interface induced by larger settling velocities, but confirming it would require dedicated experiments. Finally, we could not see any clear effect of the bottom slope in the studied experimental parameter range.

4.2.2. Current volume

While the current head volume is constant, 𝑉h ≃ 0.25 𝑉0 (see figure 5(d)), the tail volume increases with the Reynolds number (see figure 6(a)). For saline currents and the smallest settling velocity, this increase is rather linear. However, increasing the settling velocity leads to smaller values of 𝑉t/𝑉0, which may correspond to a different slope of this relationship, and/or to a different power law.
A good collapse is obtained by plotting the experimental data as a function of the Rouse number, for which we find 𝑉t/𝑉0 ∝ P−1 (see figure 6(b)). The volume increase cannot be driven solely by the Rouse number, as this would imply that saline gravity currents, for which P = 0, would have a constant 𝑉t/𝑉0 value, which is not the case. This means that 𝑉t/𝑉0 depends on both R𝑒 and P. While most currents have 𝑉t/𝑉0 ⩾ 1, suggesting the presence of water entrainment, currents for P > 0.06 have 𝑉t/𝑉0 ⩽ 1, suggesting the dominance of particle settling. The dependence of 𝑉t/𝑉0 on (R𝑒, P) for currents with 𝑉t/𝑉0 ⩾ 1 therefore has to be related to entrainment, which is discussed in section 5.

5. Water entrainment

5.1. Parametrization and hypotheses

Here, we consider a fixed observation window starting at the lock gate and ending at the end of the illuminated area (see figure 1). In this zone, the continuity equation for the current volume 𝑉 (per unit width) can be written as:

d𝑉/d𝑡 = 𝑄𝑒 − 𝑄𝑠 + 𝑄in,    (5.1)

where 𝑄𝑒 and 𝑄𝑠 are the fluxes induced by water entrainment and particle settling, respectively. As the observation window does not take into account what is inside the lock, an input flux 𝑄in must be taken into account as long as part of the suspension is transferred from the lock to the current. The entrainment flux can then be written as the quantity of water passing through the interfacial line between the current and the ambient, Γ, at a velocity 𝑤𝑒:

𝑄𝑒 = 𝑤𝑒Γ,    (5.2)

where 𝑤𝑒 = 𝐸𝑢c is the entrainment velocity (Jacobson & Testik 2014) and 𝐸 the entrainment coefficient. As shown in figure 7(a), the evolution of the current volume through time can be split into three phases.
Just after the lock opens, the volume increases due to the inflow at the upstream boundary (lock gate) induced by the column collapse (phase 1). After the tank has emptied, only entrainment and settling remain, and the current volume increase becomes slower (phase 2). As the current increases its volume (entrainment) and loses particles (settling), it dilutes up to a point where it gradually passes below the detection threshold chosen for the contour extraction (see figure 1(c–h)). At this point, the current volume starts to decrease, and (5.1) is no longer applicable. The ubiquitous presence of settling, combined with the observed diversity of cases, makes it difficult to distinguish whether the volume increase is solely due to entrainment. Therefore, we compute a bulk entrainment parameter by considering the difference between the maximum volume observed and the initial volume in the reservoir.
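As an illustration, the volume budget (5.1) with the entrainment closure (5.2) can be integrated in time; the sketch below uses hypothetical parameter values, not the experimental ones of this study.

```python
# Minimal sketch of the volume budget (5.1) with the entrainment
# closure (5.2): dV/dt = Qe - Qs + Qin, with Qe = E * u_c * Gamma.
# All numerical values are hypothetical placeholders.

def current_volume(V0, E, u_c, gamma, Q_s, Q_in, t_end, dt=0.01):
    """Forward-Euler integration of dV/dt = E*u_c*gamma - Q_s + Q_in."""
    V, t = V0, 0.0
    while t < t_end:
        Q_e = E * u_c * gamma          # entrainment flux, eq. (5.2)
        V += (Q_e - Q_s + Q_in) * dt   # volume budget, eq. (5.1)
        t += dt
    return V

# Phase 1: inflow from the lock still active (Q_in > 0)
V1 = current_volume(V0=1.0, E=5e-3, u_c=10.0, gamma=50.0,
                    Q_s=0.5, Q_in=2.0, t_end=1.0)
# Phase 2: lock emptied (Q_in = 0), entrainment and settling only,
# so the volume keeps growing but more slowly
V2 = current_volume(V0=V1, E=5e-3, u_c=10.0, gamma=50.0,
                    Q_s=0.5, Q_in=0.0, t_end=1.0)
```

In phase 3, once the current dilutes below the detection threshold, this budget no longer applies, which is why the bulk estimate above is taken at the maximum observed volume.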
We assume that, at this moment, the reservoir has completely emptied, and that the current velocity is still large enough to neglect settling processes. Following previous studies (Cenedese & Adduce 2008; Nogueira et al. 2014; Jacobson & Testik 2014; Wilson et al. 2017), the entrainment coefficient therefore reads:

𝐸 = (1/𝑢c) (d𝑉/d𝑡) (1/Γ) = (d𝑉/d𝑥) (1/Γ).    (5.3)

Following the mentioned literature, the interfacial length Γ is also taken at the time when the current volume reaches its maximum.

5.2. Bulk entrainment coefficient

The resulting entrainment coefficients are shown in figure 7(b) as a function of the Reynolds number, along with data from previous studies. Disregarding biased points (see below, and appendix A), an increasing trend with the Reynolds number is visible. On the other hand, no clear impact of the bottom slope or the settling velocity (or, equivalently here, particle size) is found in the studied range. Note that for Reynolds numbers in the range 10^4–3×10^4, volume variations induced by entrainment and by fluctuations during the propagation are of the same order of magnitude, leading to large uncertainties in the estimation of the entrainment coefficient. At large Reynolds numbers (typically R𝑒 > 10^5), entrainment saturates to a constant value. However, we attribute this to a bias induced when releasing the initial reservoir.
For the corresponding runs, the release velocity is too small compared to the current velocity. This results in the mixing of a significant portion of the reservoir with the ambient fluid brought back by the overlying backflow (see appendix A for further description). As such, these currents are still fed by an input flux, even though the slumping regime, controlled by the current head properties (formed before the opening-induced mixing occurs), is over. This results in a constant measured maximum volume, which corresponds to the product of the observation-zone length and the current height. Nevertheless, as shown by figure 7(b), our data agree well with previous experimental studies on both saline (Nogueira et al. 2014; Ottolenghi et al.
2016; Balasubramanian & Zhong 2018) and turbidity currents (Jacobson & Testik 2014; Wilson et al. 2017), suggesting a dominant correlation between entrainment and R𝑒. Surprisingly, Wilson et al. (2017) found constant entrainment values matching the saturation induced by the bias in our data at R𝑒 > 10^5. Note that the data of Balasubramanian & Zhong (2018) were obtained by a direct method based on buoyancy fluxes, which further validates the entrainment parametrization used in this and other studies. Despite the dispersion within each dataset, we however find slightly larger entrainment coefficients.
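The bulk estimate of equation (5.3) reduces to a simple ratio once the maximum volume is identified; the sketch below illustrates it with hypothetical numbers, not measured values.

```python
# Bulk entrainment estimate in the spirit of eq. (5.3):
# E = (dV/dx) / Gamma, evaluated between the initial reservoir
# volume and the maximum volume reached by the current.
# All numbers below are hypothetical placeholders.

def bulk_entrainment(V0, V_max, x0, x_max, gamma):
    """E = (V_max - V0) / (x_max - x0) / gamma."""
    dV_dx = (V_max - V0) / (x_max - x0)
    return dV_dx / gamma

# Example: a current grows from 100 to 130 cm^2 (per unit width)
# while its nose travels 60 cm, with an interfacial length of 80 cm.
E = bulk_entrainment(V0=100.0, V_max=130.0, x0=0.0, x_max=60.0, gamma=80.0)
# E = 30/60/80 = 6.25e-3, within the 1e-3 to 1e-2 decade of figure 7(b)
```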
Note that the absolute value of our results depends on the threshold chosen for the current contour extraction, which can lead to volume variations corresponding to a vertical downward shift of the data of 𝐸 ≃ 10−2.

6. Conclusion

In the present study, we investigate the slumping regime, characterized as the constant-front-velocity regime, of lock-release turbidity currents using an experimental approach. In particular, we systematically explore the influence of volume fraction, bottom slope and particle settling velocity, whose characterization remains relatively sparse and scattered in the existing literature. We define the associated independent dimensionless parameters as the Reynolds number R𝑒, the Rouse number P and the slope 𝜃. Direct comparison is also made with saline homogeneous gravity currents, for which P ≡ 0. First, we focus on the nose dynamics.
We show that, in the explored parameter range, saline and turbidity currents exhibit a constant-velocity regime, i.e. a slumping regime, provided the Rouse number remains small, P ≲ 0.02, independently of the bottom slope. During this regime, the current velocity scales as √(𝑔′ℎ0), as expected. The corresponding current Froude numbers are systematically smaller than those predicted by Benjamin (1968), but also than those from shallow-water models including the properties of lock-release systems (Ungarish 2007, 2011). Previous measurements on saline and turbidity currents have reported Froude numbers in better agreement with these theories (e.g. Shin et al.
2004; Lowe et al. 2005; Nogueira et al. 2014; Sher & Woods 2015), but also smaller ones, in agreement with those measured in this study (Longo et al. 2018; Balasubramanian & Zhong 2018). More interestingly, the current Froude numbers are found to increase with the bottom slope 𝜃, by about 35 % between 0° and 15°, while being independent of both R𝑒 and P in the range of parameters considered. Surprisingly, a similar increase of the maximum front velocity with the bottom slope has been observed in the literature in the case of granular collapses (Mangeney et al. 2010; Martin et al. 2017).

Figure 7: (a) Current volume as a function of its nose position for 𝜃 = 7.2°, 𝑢s = 0.32 cm s−1 and 𝜙 = 0.24 %. The three phases are marked: (1) 𝑄in + 𝑄𝑒 − 𝑄𝑠, (2) 𝑄𝑒 − 𝑄𝑠, (3) unknown, together with the door opening, the slumping regime and equation (5.1). (b) Water entrainment coefficient 𝐸 as a function of the bulk Reynolds number R0 = 𝜌0√(𝑔′0𝐻0)𝐻0/𝜂, with 𝐸 ∝ R0; comparison data from saline currents (Balasubramanian & Zhong 2018; Ottolenghi et al. 2016; Nogueira et al. 2014) and turbidity currents (Wilson et al. 2017; Jacobson & Testik 2014). Biased points, typically for R𝑒 > 10^5, represent runs for which we have little confidence in the calculated entrainment due to the lock opening (see section 5.2 and appendix A for further discussion). Not all error bars are shown for the sake of clarity. The legend of the colored circles is the same as in figure 5.

Even if slope-dependent, this constant regime is still attributed here to a slumping regime involving inertia and the pressure gradient, and not to a frictional–buoyancy equilibrium, which would be obtained at larger slopes and longer times (Britter & Linden 1980). This regime of constant velocity is therefore not trivial, as the along-slope component of gravity could induce a constant acceleration of the current during its propagation. Even if no theoretical background is available to support the observed constant-velocity regime, an energetic balance including the work of the along-slope weight and friction during the column slumping is found here to provide the relevant slope effect on the front velocity. While the settling velocity, i.
e. the Rouse number P, does not affect the current velocity, it can affect the duration of the slumping (constant-velocity) regime, which on the other hand does not depend on the bottom slope 𝜃. It is found that the appropriate parameter to study this duration is P/𝑎. At low values, typically P/𝑎 ≲ 0.01, the current velocity starts to decrease after 𝑡end ≃ 30𝐿0/𝑢0, as for saline currents. However, at larger values, the duration decreases, up to a point where it becomes fully controlled by a settling characteristic time, typically as 𝑡end ≃ 0.5ℎ0/𝑢s. This last scaling is only valid up to the point where the constant-velocity regime is not detected anymore, P/𝑎 ≳ 0.033.
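A rough way to combine the two limiting durations above is to take whichever is reached first; this min() combination is our own illustration, not a result of the study, and the parameter values are hypothetical.

```python
# Rough estimate of the slumping-regime duration from the two
# limiting scalings reported above: t_end ~ 30*L0/u0 at small P/a,
# and t_end ~ 0.5*h0/u_s when settling dominates. Taking the
# smaller of the two is an assumed combination for illustration;
# all parameter values are hypothetical.

def slumping_duration(L0, u0, h0, u_s):
    """Smaller of the inertial (30*L0/u0) and settling (0.5*h0/u_s) times."""
    t_inertial = 30.0 * L0 / u0
    t_settling = 0.5 * h0 / u_s if u_s > 0 else float("inf")
    return min(t_inertial, t_settling)

# Saline current (u_s = 0): the inertial scaling applies
t_saline = slumping_duration(L0=0.1, u0=0.1, h0=0.2, u_s=0.0)   # 30 s
# Turbidity current with fast-settling particles: settling-limited
t_turbid = slumping_duration(L0=0.1, u0=0.1, h0=0.2, u_s=0.01)  # 10 s
```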
Second, we focus on the current morphology during the slumping regime. We show that the current shape is mostly constant in the frame of the current nose. A striking property of this shape relates to the head (the first ∼10 centimeters), which is found to be independent of all experimental parameters (excess density, bottom slope and settling velocity) and characterized by small fluctuations around the average shape. This further supports the work of Benjamin (1968), who extracts, from energy considerations at the head scale, a head shape independent of the flow velocity, assuming the absence of friction. Furthermore, above the sublayer induced by bottom friction, the head shape is well approximated by the theoretical shape of Benjamin's current. Close to the nose, the head is further curved downward, presumably due to the influence of bottom friction, and better approximated by a portion of a logarithm.
On the other hand, the tail of the current exhibits more significant fluctuations around the average shape, suggesting more significant entrainment in this part of the current. Moreover, the volume of the tail increases with the excess density and decreases with the settling velocity, suggesting a strong interplay between entrainment and settling during the slumping regime. We further investigate entrainment by calculating entrainment coefficients from the maximum volume reached during an experiment. Even if scattered, the resulting entrainment coefficient is shown to increase rather linearly with Re, in agreement with compiled literature data on both turbidity and saline currents, and to remain independent of P and 𝜃. Note that a number of assumptions have been made in the parametrization of the entrainment coefficients. In order to validate these results, and more specifically to focus on the influence of the settling velocity, dedicated experiments with more accurate density measurements should be carried out, similarly to Balasubramanian & Zhong (2018).
This work therefore shows that, as long as P/𝑎 ≲ 0.01, the measured current morphodynamics are not impacted by the presence of particles during the slumping regime. The current velocity (𝑢c = Fr(𝜃) √(𝑔′ℎ0)), height (ℎb ≃ 0.4ℎ0), regime duration (𝑡end ≃ 30𝐿0/√(𝑔′ℎ0)) and bulk entrainment coefficients (𝐸 ∝ Re) are in agreement with previous measurements on saline density currents (Sher & Woods 2015; Ottolenghi et al. 2016; Balasubramanian & Zhong 2018). This supports modeling such currents as a homogeneous fluid of equivalent excess density. However, an influence of the slope 𝜃 on the dynamics of the current has been clearly identified here for both saline and turbidity currents.
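The slumping-regime relations listed above can be evaluated directly; the inputs below are hypothetical values chosen only to illustrate the formulas:

```python
import math

# Illustrative evaluation of the slumping-regime scalings.
# All numerical inputs below are hypothetical, not values from the paper.
g = 9.81           # gravity [m/s^2]
rho_f = 1000.0     # ambient fluid density [kg/m^3]
rho_c = 1100.0     # current (suspension) density [kg/m^3]
h0 = 0.2           # lock height [m]
Fr = 0.45          # slope-dependent Froude number Fr(theta), assumed value

g_prime = g * (rho_c - rho_f) / rho_f  # reduced gravity g'
u_c = Fr * math.sqrt(g_prime * h0)     # front velocity, u_c = Fr(theta) sqrt(g' h0)
h_b = 0.4 * h0                         # current height, h_b ~ 0.4 h0
```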
In this case, the departure from the slumping regime is to be attributed to the influence of the finite reservoir (𝑎-dependent), often reported for gravity currents and referred to as the inertial regime. On the other hand, when P/𝑎 ≳ 0.01, the current dynamics differ due to the presence of the settling particles. In particular, the regime duration is found to strongly decrease with P/𝑎, while, more surprisingly, the front velocity remains unaffected. The duration of the regime results from a complex balance between entrainment and settling. The former has been shown to depend only on Re, while the latter is intrinsically quantified by P. The resulting duration therefore depends on both, as reported experimentally here.
In this case, the departure from the slumping regime is attributed to particle settling, which is P-dependent and would require dedicated modeling, similarly to non-Newtonian effects in the fluid rheology at large particle volume fractions (Chowdhury & Testik 2011; Jacobson & Testik 2014).

Acknowledgements. We would like to acknowledge the contributors of the open-source Python libraries, including Matplotlib (Hunter 2007), Numpy (Harris et al. 2020) and Scipy (Virtanen et al. 2020), which provide an incredibly efficient ecosystem enabling scientific research in Python. We also thank Jean-Dominique Barron (IMFT) and Sébastien Cazin (IMFT) for their support in carrying out the experiments.

𝜃 [°], set-up 1: 0.00 ± 0.20, 0.89 ± 0.20, 2.83 ± 0.20, 4.98 ± 0.20, 7.24 ± 0.20
𝜃 [°], set-up 2: 7.00 ± 0.10, 10.00 ± 0.10, 15.00 ± 0.10

Figure 8: Ending time of the constant velocity regime as a function of the current characteristic time scale, for various bottom slopes in the two experimental set-ups: (a) end time 𝑡end [s] versus time scale 𝐿0/𝑢0 [s]; (b) rescaled end time (𝑢s/ℎ0)𝑡end versus P/𝑎 = (𝐿0/ℎ0)(𝑢s/𝑢0). Here, the settling velocity is 𝑢s = 0.74 cm s−1.

Funding. We acknowledge financial support from the French National Research Agency Grants, ANR-19-CE30-0041/PALAGRAM.

Declaration of interests. The authors report no conflict of interest.

Data availability statement. The data that support the findings of this study are openly available in Zenodo at https://doi.org/10.5281/zenodo.7487190.

Author ORCID. C. Gadal, https://orcid.org/0000-0002-2173-5837; M. Mercier, https://orcid.org/0000-0001-9965-3316; L. Lacaze, https://orcid.org/0000-0002-4945-7445

Appendix A. Entrainment induced by the door opening

In section 5.2, we observed that the measured entrainment coefficient saturates to a constant value for Re > 10⁵. This is attributed to the opening of the lock gate, snapshots of which are shown in figure 9. A first source of entrainment is induced at the beginning of the gate opening. As shown by figure 9(a–c), the tank initially empties as the suspension flows out at the bottom, with no opportunity for the ambient fluid to create a counter-current at the top (the locked door is impermeable). As soon as the gate has opened higher than the height of the current (figure 9(d, e)), the ambient fluid creates a counter-current just above the turbidity current, thus mixing the ambient fluid with the suspension and refilling the lock. Note that this first mechanism induces a dilution of about ≲ 10 % of the suspension behind the lock (inferred from the lock volume to be filled in figure 9(c)). A second source of entrainment occurs when the suspension column begins to collapse.
As shown by figure 9(f–h), the column collapse begins at the level of the bottom of the door, not at the top of the lock. This creates an intrusion of ambient fluid inside the lock, surrounded at the top and bottom by the suspension (figure 9(g)). This unstable situation is quickly resolved by the collapse of the upper part of the suspension, which mixes with the ambient fluid below (figure 9(h, i)).

Figure 9: Close-up on the opening of the door for an experiment made with silica sand (𝑑 ∼ 120 𝜇m, 𝑢s = 0.74 cm s−1) at 𝜙 ∼ 8 %. Panels (a–j) show successive stages; the air gap and the counter-currents are indicated.

The result, at the end of the gate opening, is a full lock of suspension at a smaller volume fraction than the initial one, although a large volume of suspension has already been released into the turbidity current. For these runs, the reservoir therefore becomes much larger than its initial volume, causing the resulting currents to fill the entire length of the tank.
The corresponding maximum current volumes are therefore constant, corresponding approximately to ℎh𝐿1, and so are the corresponding entrainment coefficients. Note that the current head, which controls the current dynamics during the slumping regime, has the appropriate initial volume fraction, since it forms before the second mechanism occurs.

Appendix B. Grain properties

B.1. Grain size distributions

The particle size distributions are obtained by taking pictures of the grains using a microscope. The resulting images are segmented using the CellPose algorithm (Pachitariu & Stringer 2022), leading to a collection of planar shapes for each particle type. For each shape, three different diameters are calculated: an average diameter assuming a circular shape, and the major and minor axes of the ellipse that has the same normalized second central moments as the selected shape.
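The three moment-based diameters described above can be computed from a segmented binary mask with plain Numpy. This is an illustrative re-implementation (function name and details are ours; the paper only states that CellPose is used for segmentation):

```python
import numpy as np

def shape_diameters(mask):
    """Three diameters of a segmented particle shape (boolean 2-D mask):
    the area-equivalent circular diameter, and the major/minor axes of
    the ellipse sharing the shape's normalized second central moments."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    d_equiv = 2.0 * np.sqrt(area / np.pi)  # circle of equal area
    # second central moments of the pixel coordinates
    mu_xx = np.var(xs)
    mu_yy = np.var(ys)
    mu_xy = np.mean((xs - xs.mean()) * (ys - ys.mean()))
    # eigenvalues of the covariance matrix give the ellipse axes
    common = np.sqrt(((mu_xx - mu_yy) / 2.0) ** 2 + mu_xy ** 2)
    lam1 = (mu_xx + mu_yy) / 2.0 + common
    lam2 = (mu_xx + mu_yy) / 2.0 - common
    d_major = 4.0 * np.sqrt(lam1)  # axis length = 4 * sqrt(eigenvalue)
    d_minor = 4.0 * np.sqrt(lam2)
    return d_equiv, d_major, d_minor
```

For a circular shape all three diameters coincide, which is the behaviour reported below for the glass beads.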
The resulting distributions are shown in figure 10. For the glass beads, all three diameters exhibit similar distributions with matching modes. For the silica sand, the average diameter lies between the minor and major axes of the corresponding ellipse. Note that the measurements for the Silibeads 200–300 𝜇m are lacking, due to problems with the microscope. However, for the glass beads, the measured distributions are in fairly good agreement with the range given by the manufacturer. Therefore, for the Silibeads 200–300 𝜇m, we take 𝑑 = 250 𝜇m.

B.2. Settling velocity

The particle settling velocity is calculated from the equilibrium between the buoyancy force,

𝑓g = (1/6) 𝜋 (𝜌p − 𝜌f) 𝑔 𝑑³, (B 1)

Figure 10: Grain size distributions (counts versus diameter 𝑑 [𝜇m], for the average, major-axis and minor-axis diameters) for the particles used in the paper: (a) Silibeads 40–70 𝜇m (𝑑 = 64 ± 13 𝜇m), (b) Sand 120 𝜇m (𝑑 = 116 ± 19 𝜇m), (c) Silibeads 100–200 𝜇m (𝑑 = 135 ± 32 𝜇m), (d) Silibeads 150–250 𝜇m (𝑑 = 187 ± 49 𝜇m).
The plain lines are fits of log-normal distributions, and the modal value of the average diameter distribution is shown at the upper right of each subplot.

and the drag force:

𝑓d = (1/8) 𝜌f 𝑢s² 𝜋 𝑑² 𝐶D, (B 2)

where 𝐶D is the drag coefficient, a function of the particle Reynolds number Rp = 𝑢s𝑑/𝜈 and therefore of the settling velocity. Various forms of the drag coefficient can be found in the literature (van der Hoef et al. 2008). Here, we follow the approach of Camenen (2007) by writing the drag coefficient in the form:

𝐶D = [(𝐴/Rp)^(1/𝑚) + 𝐵^(1/𝑚)]^𝑚, (B 3)

where 𝐴 and 𝐵 are two constants that depend on the particle shape. Balancing the two forces therefore leads to the following expression for the settling velocity:

𝑢s = (𝜈/𝑑) [ √( (1/4)(𝐴/𝐵)^(2/𝑚) + ((4/3) 𝑑∗³/𝐵)^(1/𝑚) ) − (1/2)(𝐴/𝐵)^(1/𝑚) ]^𝑚, (B 4)

where 𝑑∗ = ((𝑠−1)𝑔/𝜈²)^(1/3) 𝑑 is a dimensionless particle diameter, and 𝑠 = 𝜌p/𝜌f. Following the empirical calibration of Camenen (2007), we use 𝐴 = 24, 𝐵 = 0.4 and 𝑚 = 1.92, which corresponds to spherical particles.

To check the calculated settling velocities, we use a simple experimental set-up in which we put the particles in suspension in a fluid column by stirring strongly, and then follow the front of the suspension as the particles settle. As shown by figure 11, the calculated settling velocities match the experimental ones for dilute enough volume fractions. However, the measured settling velocity decreases with the volume fraction, as previously observed in the literature (Richardson & Zaki 1954). Note that the observed decrease is faster than the typical correction in (1 − 𝜙)^(1/3) proposed by Richardson & Zaki (1954), especially at low volume fractions. According to Di Felice (1995), the Richardson & Zaki regime is only reached for volume fractions larger than 10 %. For more dilute suspensions, the decrease of the settling velocity with 𝜙 is stronger.

Figure 11: Non-dimensional measured particle velocity 𝑣/𝑢s as a function of the particle volume fraction 𝜙, for the five particle types (Sand 120 𝜇m, Silibeads 40–70 𝜇m, 100–200 𝜇m, 150–250 𝜇m and 200–300 𝜇m), together with the (1 − 𝜙)³ curve. Note that the errorbars essentially come from the uncertainty in the calculation of 𝑢s from (B 4), inherited from the parameter uncertainties (grain size, water viscosity, densities).
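Equation (B 4) is straightforward to evaluate numerically; here is a minimal sketch with Numpy (the function name and the default parameter values, silica in water with the spherical-particle calibration, are our assumptions for illustration):

```python
import numpy as np

def settling_velocity(d, rho_p=2650.0, rho_f=1000.0, nu=1e-6, g=9.81,
                      A=24.0, B=0.4, m=1.92):
    """Settling velocity u_s from the force balance (B 4), using the
    drag-law form (B 3) of Camenen (2007). Inputs in SI units (d in m).
    Defaults (silica in water, spherical A, B, m) are assumed values."""
    s = rho_p / rho_f
    d_star = ((s - 1.0) * g / nu**2) ** (1.0 / 3.0) * d  # dimensionless diameter
    term = np.sqrt(0.25 * (A / B) ** (2.0 / m)
                   + (4.0 * d_star**3 / (3.0 * B)) ** (1.0 / m))
    return nu / d * (term - 0.5 * (A / B) ** (1.0 / m)) ** m
```

For small diameters this reduces to the Stokes law 𝑢s = (𝑠 − 1)𝑔𝑑²/(18𝜈), since 𝐴 = 24 recovers 𝐶D = 24/Rp; for 𝑑 ≈ 120 𝜇m silica it gives 𝑢s of order 1 cm s−1, comparable to the measured 0.74 cm s−1 (a difference is expected, since the sand grains are not spherical).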
Thus, we leave out this complex dependence on the particle volume fraction, and restrict ourselves to the settling velocities calculated using (B 4).

REFERENCES

Baines, Peter G 2001 Mixing in flows down gentle slopes into stratified environments. Journal of Fluid Mechanics 443, 237–270.
Balasubramanian, Sridhar & Zhong, Qiang 2018 Entrainment and mixing in lock-exchange gravity currents using simultaneous velocity-density measurements. Physics of Fluids 30 (5), 056601.
Barr, D.I.H. 1967 Densimetric exchange flow in rectangular channels. La Houille Blanche (6), 619–632.
Beghin, P., Hopfinger, E. J. & Britter, R. E. 1981 Gravitational convection from instantaneous sources on inclined boundaries. Journal of Fluid Mechanics 107, 407–422.
Benjamin, T. Brooke 1968 Gravity currents and related phenomena. Journal of Fluid Mechanics 31 (2), 209–248.
Birman, V. K., Battandier, B. A., Meiburg, E. & Linden, P. F. 2007 Lock-exchange flows in sloping channels. Journal of Fluid Mechanics 577, 53–77.
Bonnecaze, Roger T, Hallworth, Mark A, Huppert, Herbert E & Lister, John R 1995 Axisymmetric particle-driven gravity currents. Journal of Fluid Mechanics 294, 93–121.
Bonnecaze, Roger T, Huppert, Herbert E & Lister, John R 1993 Particle-driven gravity currents. Journal of Fluid Mechanics 250, 339–369.
Bonnecaze, Roger T & Lister, John R 1999 Particle-driven gravity currents down planar slopes. Journal of Fluid Mechanics 390, 75–91.
Borden, Zachary & Meiburg, Eckart 2013 Circulation based models for Boussinesq gravity currents. Physics of Fluids 25 (10), 101301.
Britter, R. E. & Linden, P. F. 1980 The motion of the front of a gravity current travelling down an incline. Journal of Fluid Mechanics 99 (3), 531–543.
Camenen, Benoît 2007 Simple and general formula for the settling velocity of particles. Journal of Hydraulic Engineering 133 (2), 229–233.
Cantero, Mariano I., Lee, J. R., Balachandar, S. & Garcia, Marcelo H. 2007 On the front velocity of gravity currents, vol. 586.
Cantero, Mariano I., Shringarpure, Mrugesh & Balachandar, S. 2012 Towards a universal criteria for turbulence suppression in dilute turbidity currents with non-cohesive sediments. Geophysical Research Letters 39 (14), 1–5.
Carter, Lionel, Gavey, Rachel, Talling, Peter J & Liu, James T 2014 Insights into submarine geohazards from breaks in subsea telecommunication cables. Oceanography 27 (2), 58–67.
Cenedese, Claudia & Adduce, Claudia 2008 Mixing in a density-driven current flowing down a slope in a rotating fluid. Journal of Fluid Mechanics 604, 369–388.
Chowdhury, MR & Testik, FY 2011 Laboratory testing of mathematical models for high-concentration fluid mud turbidity currents. Ocean Engineering 38 (1), 256–270.
Clare, Michael, Lintern, D Gwyn, Rosenberger, Kurt, Hughes Clarke, John E, Paull, Charles, Gwiazda, Roberto, Cartigny, Matthieu JB, Talling, Peter J, Perara, Daniel, Xu, Jingping & others 2020 Lessons learned from the monitoring of turbidity currents and guidance for future platform designs. Geological Society, London, Special Publications 500 (1), 605–634.
Dai, Albert 2013 Experiments on gravity currents propagating on different bottom slopes. Journal of Fluid Mechanics 731, 117–141.
Dai, Albert 2014 Non-Boussinesq gravity currents propagating on different bottom slopes. Journal of Fluid Mechanics 741, 658–680.
Di Felice, Renzo 1995 Hydrodynamics of liquid fluidisation. Chemical Engineering Science 50 (8), 1213–1245.
Dobran, Flavio, Neri, Augusto & Todesco, Micol 1994 Assessing the pyroclastic flow hazard at Vesuvius. Nature 367 (6463), 551–554.
Hallworth, Mark A, Hogg, Andrew J & Huppert, Herbert E 1998 Effects of external flow on compositional and particle gravity currents. Journal of Fluid Mechanics 359, 109–142.
Harris, Charles R, Millman, K Jarrod, van der Walt, Stéfan J, Gommers, Ralf, Virtanen, Pauli, Cournapeau, David, Wieser, Eric, Taylor, Julian, Berg, Sebastian, Smith, Nathaniel J & others 2020 Array programming with NumPy. Nature 585, 357–362.
Harris, Thomas C, Hogg, Andrew J & Huppert, Herbert E 2001 A mathematical framework for the analysis of particle-driven gravity currents. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 457 (2009), 1241–1272.
van der Hoef, Martin Anton, van Sint Annaland, M, Deen, NG & Kuipers, JAM 2008 Numerical simulation of dense gas-solid fluidized beds: a multiscale modeling strategy. Annu. Rev. Fluid Mech. 40, 47–70.
Hogg, Andrew J, Ungarish, Marius & Huppert, Herbert E 2000 Particle-driven gravity currents: asymptotic and box model solutions. European Journal of Mechanics-B/Fluids 19 (1), 139–165.
Hogg, Andrew J. & Woods, Andrew W. 2001 The transition from inertia- to bottom-drag-dominated motion of turbulent gravity currents. Journal of Fluid Mechanics 449, 201–224.
Hunter, J. D. 2007 Matplotlib: a 2D graphics environment. Computing in Science & Engineering 9, 90–95.
Huppert, Herbert E 1998 Quantitative modelling of granular suspension flows. Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 356 (1747), 2471–2496.
Huppert, Herbert E & Simpson, John E 1980 The slumping of gravity currents. Journal of Fluid Mechanics 99 (4), 785–799.
Ikeda, Jin & Testik, Firat Y 2021 Propagation, deposition, and suspension characteristics of constant-volume particle-driven gravity currents. Environmental Fluid Mechanics 21 (1), 177–208.
Islam, M Ashraful & Imran, Jasim 2010 Vertical structure of continuous release saline and turbidity currents. Journal of Geophysical Research: Oceans 115 (C8).
Jacobson, MR & Testik, FY 2014 Turbulent entrainment into fluid mud gravity currents. Environmental Fluid Mechanics 14 (2), 541–563.
Keulegan, GH 1957 An experimental study of the motion of saline water from locks into fresh water channels. Nat. Bur. Stand. Rept., Technical Report 5168.
Khodkar, MA, Nasr-Azadani, MM & Meiburg, E 2017 Partial-depth lock-release flows. Physical Review Fluids 2 (6), 064802.
Konopliv, NA, Smith, Stefan G Llewellyn, McElwaine, JN & Meiburg, E 2016 Modelling gravity currents without an energy closure. Journal of Fluid Mechanics 789, 806–829.
Lippert, Martin C & Woods, Andrew W 2020 Experiments on the sedimentation front in steady particle-driven gravity currents. Journal of Fluid Mechanics 889.
Longo, S., Ungarish, M., Di Federico, V., Chiapponi, L. & Petrolo, D. 2018 Gravity currents produced by lock-release: theory and experiments concerning the effect of a free top in non-Boussinesq systems. Advances in Water Resources 121, 456–471.
Lowe, Ryan J, Rottman, James W & Linden, PF 2005 The non-Boussinesq lock-exchange problem. Part 1. Theory and experiments. Journal of Fluid Mechanics 537, 101–124.
Maggi, Maria Rita, Adduce, Claudia & Negretti, Maria Eletta 2022 Lock-release gravity currents propagating over roughness elements. Environmental Fluid Mechanics, 1–20.
Mangeney, A, Roche, Olivier, Hungr, O, Mangold, N, Faccanoni, Gloria & Lucas, A 2010 Erosion and mobility in granular collapse over sloping beds. Journal of Geophysical Research: Earth Surface 115 (F3).
Marino, BM, Thomas, LP & Linden, PF 2005 The front condition for gravity currents. Journal of Fluid Mechanics 536, 49–78.
Martin, Nathan, Ionescu, IR, Mangeney, Anne, Bouchut, François & Farin, Maxime 2017 Continuum viscoplastic simulation of a granular column collapse on large slopes: μ(I) rheology and lateral wall effects. Physics of Fluids 29 (1), 013301.
Maxworthy, T. & Nokes, R. I. 2007 Experiments on gravity currents propagating down slopes. Part 1. The release of a fixed volume of heavy fluid from an enclosed lock into an open channel. Journal of Fluid Mechanics 584, 433–453.
Meiburg, Eckart, Blanchette, F., Strauss, M., Kneller, B., Glinsky, M. E., Necker, F., Härtel, C. & Kleiser, L. 2005 High resolution simulations of particle-driven gravity currents. American Society of Mechanical Engineers, Fluids Engineering Division (Publication) FED 261, 381–390.
Nogueira, Helena IS, Adduce, Claudia, Alves, Elsa & Franca, Mário J 2014 Dynamics of the head of gravity currents. Environmental Fluid Mechanics 14 (2), 519–540.
Ottolenghi, Luisa, Adduce, Claudia, Inghilesi, Roberto, Armenio, Vincenzo & Roman, Federico 2016 Entrainment and mixing in unsteady gravity currents. Journal of Hydraulic Research 54 (5), 541–557.
Pachitariu, Marius & Stringer, Carsen 2022 Cellpose 2.0: how to train your own model. Nature Methods 19 (12), 1634–1641.
Rastello, M, Ancey, C, Ousset, F, Magnard, R & Hopfinger, E J 2002 An experimental study of particle-driven gravity currents on steep slopes with entrainment of particles. Natural Hazards and Earth System Sciences 2 (3-4), 181–185.
Richardson, JF & Zaki, WN 1954 The sedimentation of a suspension of uniform spheres under conditions of viscous flow. Chemical Engineering Science 3 (2), 65–73.
Rottman, James W & Simpson, John E 1983 Gravity currents produced by instantaneous releases of a heavy fluid in a rectangular channel. Journal of Fluid Mechanics 135, 95–110.
Sher, Diana & Woods, Andrew W 2015 Gravity currents: entrainment, stratification and self-similarity. Journal of Fluid Mechanics 784, 130–162.
Shin, JO, Dalziel, SB & Linden, PF 2004 Gravity currents produced by lock exchange. Journal of Fluid Mechanics 521, 1–34.
Simpson, JE & Britter, RE 1980 Experiments on the dynamics of the front of a gravity current. Journal of Fluid Mechanics 88, 223–240.
Simpson, John E 1972 Effects of the lower boundary on the head of a gravity current. Journal of Fluid Mechanics 53 (4), 759–768.
Simpson, John E 1999 Gravity currents: In the environment and the laboratory. Cambridge University Press.
Stagnaro, M & Bolla Pittaluga, Michele 2014 Velocity and concentration profiles of saline and turbidity currents flowing in a straight channel under quasi-uniform conditions. Earth Surface Dynamics 2 (1), 167–180.
Steenhauer, K., Tokyay, T. & Constantinescu, G. 2017 Dynamics and structure of planar gravity currents propagating down an inclined surface. Physics of Fluids 29 (3), 036604.
Stethem, Chris, Jamieson, Bruce, Schaerer, Peter, Liverman, David, Germain, Daniel & Walker, Simon 2003 Snow avalanche hazard in Canada: a review. Natural Hazards 28 (2), 487–515.
Séon, T., Hulin, J.-P., Salin, D., Perrin, B. & Hinch, E. J. 2005 Buoyancy driven miscible front dynamics in tilted tubes. Physics of Fluids 17 (3), 031702.
Ungarish, Marius 2007 A shallow-water model for high-Reynolds-number gravity currents for a wide range of density differences and fractional depths. Journal of Fluid Mechanics 579, 373–382.
Ungarish, Marius 2009 An introduction to gravity currents and intrusions. Chapman and Hall/CRC.
Ungarish, Marius 2011 Two-layer shallow-water dam-break solutions for non-Boussinesq gravity currents in a wide range of fractional depth. Journal of Fluid Mechanics 675, 27–59.
Ungarish, M & Zemach, T 2005 On the slumping of high Reynolds number gravity currents in two-dimensional and axisymmetric configurations. European Journal of Mechanics-B/Fluids 24 (1), 71–90.
Virtanen, Pauli, Gommers, Ralf, Oliphant, Travis E, Haberland, Matt, Reddy, Tyler, Cournapeau, David, Burovski, Evgeni, Peterson, Pearu, Weckesser, Warren, Bright, Jonathan & others 2020 SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17, 261–272.
Von Kármán, Theodore 1940 The engineer grapples with nonlinear problems. Bulletin of the American Mathematical Society 46 (8), 615–683.
Wilson, Richard I, Friedrich, Heide & Stevens, Craig 2017 Turbulent entrainment in sediment-laden flows interacting with an obstacle. Physics of Fluids 29 (3), 036603.
diff --git a/j9AzT4oBgHgl3EQf5P4V/content/tmp_files/2301.01855v1.pdf.txt b/j9AzT4oBgHgl3EQf5P4V/content/tmp_files/2301.01855v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..50ff0aba81ae8c97b44ebad94fae519dc63c1817 --- /dev/null +++ b/j9AzT4oBgHgl3EQf5P4V/content/tmp_files/2301.01855v1.pdf.txt @@ -0,0 +1,1262 @@

Weak Deflection Angle, Hawking Radiation and Greybody Bound of Reissner–Nordström Black Hole Corrected by Bounce Parameter

Wajiha Javed,1,∗ Mehak Atique,1,† Reggie C. Pantig,2,‡ and Ali Övgün3,§
1 Department of Mathematics, Division of Science and Technology, University of Education, Lahore-54590, Pakistan
2 Physics Department, Mapúa University, 658 Muralla St., Intramuros, Manila 1002, Philippines
3 Physics Department, Eastern Mediterranean University, Famagusta, 99628 North Cyprus via Mersin 10, Turkey
(Dated: January 6, 2023)

In this study, we probe weak lensing by a Reissner–Nordström black hole corrected by a bounce parameter in plasma and dark matter mediums. For this, the optical geometry and the Gibbons–Werner approach are utilized to obtain the bending angle in the weak field limit. We find that the presence of these mediums increases the black hole's bending angle. In addition, we graphically study the deflection angle of light with respect to the impact parameter and find that the bounce parameter directly affects the angle. Further, we compute the Hawking radiation via a topological method involving two invariants and verify the result against the standard method of calculating the Hawking temperature. We also compute a bound on the black hole's greybody factor, analyze the bound graphically, and observe that it shows convergent behavior.
We also show that, in the appropriate limits of the parameters, our results reduce to those of the Reissner–Nordström and Schwarzschild black holes. Finally, we probe how the bounce parameter affects the shadow radius and compare it to the shadow produced when the black hole is immersed in plasma. It is revealed that the rate at which the shadow radius changes with respect to r readily tends to zero under the effect of the bounce parameter, while the plasma merely increases the shadow radius.

PACS numbers: 95.30.Sf, 98.62.Sb, 97.60.Lf
Keywords: General Relativity; Bending Angle; Gauss-Bonnet Theorem; Plasma Medium; Black Hole; Greybody; Hawking Temperature

I. INTRODUCTION

General relativity (GR) is the theory of gravity proposed by Einstein in 1916, and in GR Einstein gave the idea of black holes (BHs) [1]. Building on Newton's corpuscular theory of light, which assumed that photons are ultra-light particles, the geologist Michell proposed the existence of dark stars; today, these dark stars are known as BHs. Black holes are fascinating astronomical objects with a gravitational attraction so powerful that nothing can escape them, not even light. According to the no-hair theorem, all astrophysical BHs are fully characterized by their masses and spins. A BH has two main components: the singularity and the event horizon. The question that attracts the most attention in GR concerns the inner structure of a BH. However, because of the presence of the spacetime singularity, where the curvature diverges and GR breaks down, this question cannot be answered easily. The singularity theorems, presented by Penrose and Hawking [2], state that gravitational collapse under physically valid circumstances always results in the formation of a singularity, although in some scenarios (such as a cosmological constant in the spacetime region) the theorems' assumptions may not apply.
Black holes having regular centers, or in other words no singularity, are called regular BHs or non-singular BHs. The first regular BH with horizons and no core singularity [3] was proposed by Bardeen [4]. Near the origin, the Bardeen BH behaves like a de Sitter spacetime; however, for r → ∞ it acts like a Schwarzschild BH [3]. Later, Ayon-Beato and Garcia [5] showed that Bardeen's model is an exact solution of GR coupled to non-linear electrodynamics. There has been significant progress in the study and application of regular BHs [6–8], as well as regular rotating black holes [8, 9]. Most of these solutions are based on Bardeen's concept, which uses non-linear electrodynamics as a source.

According to Hawking [10], a BH can emit thermal radiation once quantum effects are taken into account, and Hawking radiation is the term used to describe this radiation. The production and annihilation of particles are theoretically feasible in the context of quantum field theory. When pair creation occurs near the BH's horizon, one of the particles of the pair falls back into the BH while the other departs from the BH's horizon; the escaping particles are observed by an outside observer as Hawking radiation [11–13]. According to GR, the spacetime bent by a BH acts as a gravitational potential inside which the particles move. Some Hawking radiation is returned to the BH while the remainder passes through the potential to infinity; in this respect, the transmission probability is known as the greybody factor. Several methods for obtaining Hawking radiation have been proposed. Using a topological method, Zhang, Wei, and Liu [14] investigated the Hawking temperature of the BTZ BH. Övgün et al. [15] computed the Hawking temperature for BHs by applying the topological strategy.

∗ wajiha.javed@ue.edu.pk
† mehakatique1997@gmail.com
‡ rcpantig@mapua.edu.ph
§ ali.ovgun@emu.edu.tr
arXiv:2301.01855v1 [gr-qc] 5 Jan 2023
Kruglov [16] explored the Hawking temperature of a magnetically charged BH through its surface gravity and horizon in the context of non-linear electrodynamics. The greybody factor can be calculated in a variety of ways. The matching approach can be used to derive an approximate greybody factor [17–19]. The WKB approximation may be used to calculate the greybody factor if the gravitational potential is high enough [20, 21]. The greybody factor may also be estimated using a rigorous bound, which is an alternative to approximation and can be used to describe a BH qualitatively. Visser [22] derived some very general constraints on the reflection and transmission coefficients for one-dimensional potential scattering. Boonserm and Visser [23] calculated the greybody factor's bound for Schwarzschild BHs by examining the Regge–Wheeler equation for arbitrary wave angular momentum and particle spin. Javed, Hussain, and Övgün [24] worked out the bounds on the Kazakov–Solodukhin BH's greybody factor.

The gravitational lensing (GL) effect, in which a light beam is deflected while passing by a massive object, is one of GR's most important predictions. For determining the mass of galaxies and clusters [25, 26], as well as for discovering dark energy and dark matter (DM) [27], GL has become one of the most powerful instruments in astronomy and cosmology. Since the first measurements of the Sun's gravitational bending of light, the lens equation has been used to examine GL effects for BHs, wormholes, cosmic strings, and other objects. Gravitational lensing comes in two types, strong and weak. Strong GL is intense enough to generate multiple images, such as arcs or Einstein rings; in this type the geometry is favorable and the bending is rather large, whereas weak GL is not intense enough to create multiple images, and the geometry is less suitable.
Since the 19th century, various studies of GL have been carried out not only for BHs but also for wormholes, cosmic strings, global monopoles, and neutron stars [28–35].

Gibbons and Werner (GW) presented a technique for calculating the deflection angle in 2008. Their technique was developed using the Gauss–Bonnet theorem (GBT) and the optical geometry of the BH's spacetime, with the source and observer in asymptotic regions. Werner [36] soon expanded this approach to stationary spacetimes. In the GBT, one can utilize the domain G_S confined by the light ray together with a circular boundary curve C_S centered at the lens, which the light ray intersects at the source and the receiver; the source and receiver are considered to be at the same coordinate distance S from the lens. In the optical metric and under the weak field approximation, the GBT can be expressed as follows:

\iint_{G_S} \tilde{K}\, dS + \oint_{\partial G_S} k\, dt + \sum_i \epsilon_i = 2\pi X(G_S),   (1)

wherein \tilde{K} indicates the Gaussian optical curvature, k stands for the geodesic curvature, dS is the surface element of the optical geometry, and G_S is a region containing the light ray's source, the observer's position, and the lens's center. At the i-th vertex, \epsilon_i represents the exterior angle. For simplicity, we suppose that as the radial distance S → ∞, the sum of the exterior angles \theta_i becomes \pi for the observer. The asymptotic deflection angle \tilde{\alpha} can then be calculated as

\tilde{\alpha} = -\int_0^{\pi} \int_{b/\sin\varphi}^{\infty} \tilde{K}\, dS,   (2)

where b represents the impact parameter. The GBT integral may thus be evaluated over an infinite region bounded by the light ray. Instead of utilizing an asymptotic receiver and source, Ishihara et al. [37, 38] modified this approach for finite distances. The finite-distance approach was then applied to axisymmetric spacetimes by Ono et al. [39]. In the plasma medium, the GBT was used by Crisnejo and Gallo [40] to determine the gravitational bending of light.
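To make the deflection integral (2) concrete, it can be evaluated numerically in the simplest setting. The sketch below is ours, not from the paper: it assumes the leading-order Schwarzschild optical curvature \tilde{K} ≈ −2m/r³ with surface element dS ≈ r dr dφ and the straight-line path r = b/sin φ, and it recovers the classic weak-field result 4m/b.

```python
import math

def weak_deflection_schwarzschild(m, b, n_phi=4000):
    """Numerically evaluate Eq. (2) at leading order:
    alpha = -integral_0^pi integral_{b/sin(phi)}^inf K r dr dphi,  K ~ -2m/r^3."""
    total = 0.0
    dphi = math.pi / n_phi
    for i in range(n_phi):
        phi = (i + 0.5) * dphi                 # midpoint rule in phi
        # inner radial integral: int_{b/sin(phi)}^inf (2m/r^2) dr = 2m sin(phi)/b
        total += (2.0 * m * math.sin(phi) / b) * dphi
    return total

m, b = 1.0, 1.0e4
alpha = weak_deflection_schwarzschild(m, b)
print(alpha)   # approaches 4m/b = 4.0e-4
```

The inner radial integral is done analytically (it is elementary) and only the φ integration is discretized, which keeps the sketch short while still exercising the structure of Equation (2).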
Using massive particles and the Jacobi–Maupertuis Randers–Finsler metric within the GBT, Li et al. [41, 42] investigated finite-distance effects on the weak bending angle.

The story of DM begins with the early discoveries of missing matter: by Oort [43] in the Galactic disk, which modern observations have not confirmed, and by Zwicky [44] in the Coma cluster, much later understood to be "DM". Dark matter comprises particles that do not absorb, reflect, or emit light, making them impossible to detect using electromagnetic radiation; only gravitational interactions can detect DM. We know that DM is non-baryonic, non-relativistic, and possesses weak non-gravitational interactions. Weakly interacting massive particles (WIMPs), super-WIMPs, axions, and sterile neutrinos are the four main DM candidates [45]. Dark matter constitutes about 85% of the total mass of the Universe [46] and is used to explain the strange behavior of stars and galaxy dynamics. In the DM medium, Pantig and Övgün [47–51] studied weak GL by wormholes and BHs.

Light that passes close to a BH is refracted by the gravitational field, creating the BH's shadow. The BH's shadow is a dark area frequently surrounded by a luminous ring, and the BH's mass and angular momentum determine its size and form. Many scientists attempted to predict how the observable appearance of a BH surrounded by bright material would look before the spectacular observation of a BH's shadow by the Event Horizon Telescope Collaboration [52, 53]. For instance, Bardeen et al. [54] examined the shadow of the Kerr BH, while Synge [55] studied the shadow of the Schwarzschild spacetime. The bright accretion disc surrounding a BH was drawn by hand by Luminet [56].
In addition, due to the transparent emissions close to the BH, it is predicted that a BH displays its shadow, which is produced by gravitational light deflection and by photons captured at its event horizon. The photon ring, a geometric characteristic of spacetime, determines the shadow radius [57]. To our knowledge, few studies of the RN black hole with a bounce parameter have been conducted; for instance, [58] has considered its photon rings and shadows. In this study, we analyze such a metric under the influence of plasma and dark matter. The shadow cast can also reveal imprints of the spacetime, and several studies have been conducted on using black holes for dark matter detection [50, 51, 59–68].

This paper aims to study the weak GL of the black bounce Reissner–Nordström spacetime utilizing optical geometry and the GBT in plasma and DM mediums. Moreover, it is of interest to calculate the Hawking temperature and greybody bound of the BH. We also study the graphical behavior of the deflection angle and the greybody bound, focusing on how the bounce parameter (introduced in the Reissner–Nordström BH) affects the bending angle, the Hawking temperature, and the bound.

The layout of our paper is as follows. Section II discusses the black bounce Reissner–Nordström spacetime. In Section III, we obtain the optical metric from the four-dimensional spherically symmetric metric, compute the deflection angle of the BH in a plasma medium using the GBT, and analyze its graphical behavior. The computation of the bending angle in the DM medium is given in Section IV. Section V is devoted to investigating the Hawking temperature of the black bounce Reissner–Nordström BH via the GBT. The computation of the greybody factor's bound of the black bounce Reissner–Nordström BH, and the graphical behavior of the bound, is addressed in Section VI. Finally, Section VII discusses the shadow behavior.
The purpose of Section VIII is to sum up the findings of this research and propose a research direction. Throughout the paper, we use natural units G = c = 1 and metric signature (−, +, +, +).

II. BLACK BOUNCE REISSNER–NORDSTRÖM SPACETIME

One of the most significant problems is the prediction of a spacetime singularity inside a BH or at the start of the universe, which implies that GR fails there. To address the singularity issue, Bardeen [4] was the first to put forward the idea of regular BHs, which has continuously attracted scientific interest. It is convenient to consider regular BHs because of the problematic nature of spacetime singularities: regular BHs are solutions of the gravity equations that have an event horizon but no spacetime singularities. Based on bounce and quantum corrections, a wide range of regular black hole solutions have been obtained [69–71]. The black bounce spacetime smoothly interpolates between the ordinary Schwarzschild BH and the Morris–Thorne traversable wormhole [72]. It is noteworthy that the geometry is everywhere regular, and one obtains a distinct type of "regular BH", where the "origin" r = 0 can be spacelike, null, or timelike, as long as the parameter a ≠ 0. Additionally, it was demonstrated that the spacetime metric may be utilized to characterize several interesting physical situations, such as a developing black-bounce, a wormhole to black-bounce transition, and the opposite black-bounce to wormhole transition. For the Reissner–Nordström BH, a regularizing procedure was recently proposed [71] which does not produce a standard regular BH [73], such as the Bardeen or Hayward BHs; rather, it produces a charged regular BH called a black bounce Reissner–Nordström, or charged black bounce.
In a static spherically symmetric spacetime, the line-element for the black bounce Reissner–Nordström BH can be written as [74]

ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + h(r)^2\,(d\theta^2 + \sin^2\theta\, d\varphi^2),   (3)

where the metric functions f(r) and h(r)^2 are defined as

f(r) = 1 - \frac{2m}{\sqrt{r^2 + a^2}} + \frac{Q^2}{r^2 + a^2}   and   h(r)^2 = r^2 + a^2.

In the metric function, m stands for the mass of the BH, Q indicates the charge, and a represents the bounce parameter of the black bounce Reissner–Nordström spacetime. Several characteristics of the black bounce family have been thoroughly investigated [75, 76]: curvature singularities are globally absent from the black bounce family, and it satisfies all observable weak field tests. Based on the values of the charge Q and the bounce parameter a, one can easily interpolate between the Reissner–Nordström and Schwarzschild BHs: taking Q ≠ 0 and a = 0 gives the Reissner–Nordström BH, while Q = 0 and a = 0 gives the Schwarzschild BH. The event horizon of the Reissner–Nordström BH corrected by the bounce parameter is obtained by setting f(r) = 0:

r_h = \sqrt{\left(m + \sqrt{m^2 - Q^2}\right)^2 - a^2}.

One can define a coordinate speed of light in terms of radial null curves (ds^2 = d\theta = d\varphi = 0), since the radial coordinate ranges over r ∈ (−∞, +∞) [74]:

C(r) = \left|\frac{dr}{dt}\right| = f(r) = 1 - \frac{2m}{\sqrt{r^2 + a^2}} + \frac{Q^2}{r^2 + a^2}.   (4)

Thus, in this spacetime, the area of a sphere at radial coordinate r is A(r) = 4\pi h(r)^2. The wormhole throat is where this area is minimized, so its location r_0 is determined by the condition A'(r_0) = 0, and the throat radius is h_0 = h(r_0). We now divide this geometry into three categories [74]:

1. The outer and inner horizons exist at r_h for a < m + \sqrt{m^2 - Q^2} and |Q| < m. In this instance, ∃ r_h ∈ ℝ* with C(r_h) = 0.
Since light then has zero coordinate speed, it cannot escape the horizon. This geometry describes a charged regular black hole with the usual outer and inner horizons.

2. One obtains a single extremal horizon at r_h = 0 when a = m + \sqrt{m^2 - Q^2} and |Q| < m, with C(r_h) = 0. In this case, the geometry represents an extremal charged regular BH, which is a one-way charged traversable wormhole with a single extremal null throat at r_h = 0.

3. When a > m + \sqrt{m^2 - Q^2}, whether |Q| < m or |Q| > m, there are no horizons, so C(r) ≠ 0 for all radial coordinates r ∈ (−∞, +∞). This case represents a two-way charged traversable wormhole, and light can travel throughout the domain.

III. PLASMA INFLUENCED DEFLECTION ANGLE

Guo and Miao [74] calculated the deflection angle by the Reissner–Nordström BH corrected by a bounce parameter in a non-plasma medium utilizing the GBT. In this section, we examine how the presence of a plasma affects the bending of light by the black bounce Reissner–Nordström BH specified by charge Q and bounce parameter a. In the plasma-medium scenario, the refractive index for the black bounce Reissner–Nordström spacetime is [40]

n(r) = \sqrt{1 - \delta\left(1 - \frac{2m}{\sqrt{r^2 + a^2}} + \frac{Q^2}{r^2 + a^2}\right)},   (5)

where \delta = \omega_e^2/\omega_\infty^2, and the plasma parameters \omega_e and \omega_\infty represent the electron plasma frequency and the photon frequency as measured by a static observer at infinity. For the static spherically symmetric metric in Equation (3), we assume that the light source and the observer lie in the equatorial plane (\theta = \pi/2). Because we are working with null geodesics, we set ds^2 = 0 to find the corresponding optical metric:

dt^2 = g_{pq}\,dx^p dx^q = n^2(r)\left[\frac{1}{f(r)^2}\,dr^2 + \frac{h(r)^2}{f(r)}\,d\varphi^2\right],   (6)

where p, q ∈ {1, 2}.
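The horizon structure just described is easy to verify numerically: the closed-form r_h satisfies f(r_h) = 0, and the three regimes are separated by comparing a with m + \sqrt{m^2 - Q^2}. A minimal sketch (function names and sample values are our own choices):

```python
import math

def f(r, m, Q, a):
    """Metric function of Eq. (3) for the black bounce Reissner-Nordstrom spacetime."""
    s2 = r * r + a * a
    return 1.0 - 2.0 * m / math.sqrt(s2) + Q * Q / s2

def horizon_radius(m, Q, a):
    """r_h = sqrt((m + sqrt(m^2 - Q^2))^2 - a^2); real only in the black hole regimes."""
    s = m + math.sqrt(m * m - Q * Q)
    return math.sqrt(s * s - a * a) if a <= s else None

def regime(m, Q, a):
    """Classify the geometry by comparing a with m + sqrt(m^2 - Q^2)."""
    s = m + math.sqrt(m * m - Q * Q)
    if a < s:
        return "regular BH with outer and inner horizons"
    if a == s:  # exact comparison is fine for hand-picked values
        return "extremal regular BH / one-way wormhole"
    return "two-way traversable wormhole (no horizons)"

m, Q, a = 1.0, 0.5, 0.3
rh = horizon_radius(m, Q, a)
print(rh, f(rh, m, Q, a))   # f vanishes at the horizon (up to float error)
print(regime(m, Q, a))      # a < m + sqrt(m^2 - Q^2): black hole branch
print(regime(m, Q, 5.0))    # large bounce parameter: wormhole branch
```

Note that for Q = a = 0 the same function returns r_h = 2m, the Schwarzschild value, matching the interpolation property stated above.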
In order to determine the optical Gaussian curvature \tilde{K} from the optical metric of Equation (6), we use the expression

\tilde{K} = \frac{R}{2},   (7)

where R is the Ricci scalar calculated from the optical metric. Utilizing Equation (7), the Gaussian optical curvature of the black bounce Reissner–Nordström BH in the plasma medium is

\tilde{K} \simeq -\frac{2m}{r^3} + \frac{3Q^2}{r^4} - \frac{6mQ^2}{r^5} + \frac{5Q^2\omega_e^2}{r^4\omega_\infty^2} - \frac{3m\omega_e^2}{r^3\omega_\infty^2} - \frac{26mQ^2\omega_e^2}{r^5\omega_\infty^2} - \frac{12a^2Q^2}{r^6} - \frac{a^2}{r^4} + \frac{28a^2mQ^2}{r^7} + \frac{10a^2m}{r^5} - \frac{20a^2Q^2\omega_e^2}{r^6\omega_\infty^2} - \frac{a^2\omega_e^2}{r^4\omega_\infty^2} + \frac{115a^2mQ^2\omega_e^2}{r^7\omega_\infty^2} + \frac{31a^2m\omega_e^2}{2r^5\omega_\infty^2} + \mathcal{O}(m^2, a^4, Q^4).   (8)

The value of the Gaussian optical curvature obtained in Equation (8) will be used later to compute the bending angle. Now, to acquire the deflection angle of the black bounce Reissner–Nordström BH in a plasma medium, we make use of the GBT, defined as follows [77]:

\iint_{G_S} \tilde{K}\,dS + \oint_{\partial G_S} k\,dt + \sum_i \epsilon_i = 2\pi X(G_S).   (9)

In the above equation, k describes the geodesic curvature, defined as k = g(\nabla_{\dot{\gamma}}\dot{\gamma}, \ddot{\gamma}) with g(\dot{\gamma}, \dot{\gamma}) = 1, where \ddot{\gamma} denotes the unit acceleration vector, and \epsilon_i represents the exterior angle at the i-th vertex. As S → ∞, the jump angles become \pi/2, so that we obtain \theta_o + \theta_S → \pi. Since G_S is a non-singular region, the Euler characteristic X(G_S) equals 1, and the following result is obtained:

\iint_{G_S} \tilde{K}\,dS + \oint_{\partial G_S} k\,dt + \epsilon_i = 2\pi X(G_S),   (10)

where \epsilon_i = \pi denotes the total jump angle, and as S → 0 the effective element is

k(E_S) = |\nabla_{\dot{E}_S}\dot{E}_S|.   (11)

The radial component of the geodesic curvature is expressed as follows [77]:

(\nabla_{\dot{E}_S}\dot{E}_S)^r = \dot{E}_S^{\varphi}\,\partial_{\varphi}\dot{E}_S^r + \Gamma^r_{\varphi\varphi}(\dot{E}_S^{\varphi})^2.   (12)

For very large S, we obtain

(\nabla_{\dot{E}_S^r}\dot{E}_S^r)^r \rightarrow \frac{1}{S}.   (13)

It follows that the geodesic curvature is independent of topological defects, implying that k(E_S) → 1/S. Utilizing the optical metric given in Equation (6), one can write dt = S\,d\varphi, and hence k(E_S)\,dt = d\varphi.
Now, using all the results obtained above together with the straight-line approximation r = b/\sin\varphi, the bending angle \tilde{\alpha} can be calculated from

\tilde{\alpha} = -\int_0^{\pi}\int_{b/\sin\varphi}^{\infty} \tilde{K}\,dS,   (14)

where dS = \sqrt{\det g}\,dr\,d\varphi. Using Equation (14) and the optical Gaussian curvature of Equation (8), the bending angle \tilde{\alpha} of the black bounce Reissner–Nordström BH in the plasma medium, up to leading order, is

\tilde{\alpha} \simeq \frac{4m}{b} + \frac{2m\omega_e^2}{b\omega_\infty^2} - \frac{8mQ^2}{3b^3} - \frac{3\pi Q^2}{4b^2} + \frac{2mQ^2\omega_e^2}{b^3\omega_\infty^2} - \frac{\pi Q^2\omega_e^2}{2b^2\omega_\infty^2} - \frac{8a^2m}{3b^3} + \frac{a^2\pi}{4b^2} + \frac{64a^2mQ^2}{15b^5} + \frac{27a^2\pi Q^2}{32b^4} - \frac{4a^2m\omega_e^2}{3b^3\omega_\infty^2} - \frac{16a^2mQ^2\omega_e^2}{5b^5\omega_\infty^2} + \frac{9a^2\pi Q^2\omega_e^2}{16b^4\omega_\infty^2} + \mathcal{O}(m^2, a^4, Q^4).   (15)

The above deflection angle depends on the mass m, charge Q, bounce parameter a, impact parameter b, and the plasma parameters \omega_e and \omega_\infty. The charge-dependent terms without the bounce parameter a arise from the charged nature of the BH; the remaining terms are corrections involving the bounce parameter a. For a = 0, one recovers the bending angle \tilde{\alpha} of a Reissner–Nordström BH in a plasma medium, and neglecting both the charge and the bounce parameter reduces the angle to that of the Schwarzschild BH. We also observe that the plasma increases the deflection angle: the plasma contribution is inversely proportional to the photon frequency, so for a fixed electron frequency the bending angle increases as the photon frequency is lowered. Moreover, one recovers the bending angle in the non-plasma medium [74] by taking \omega_e = 0 (i.e., \delta → 0) in Equation (15). Finally, the deflection angle of Equation (15) depends directly on the mass m, charge Q, and bounce parameter a, and inversely on the impact parameter b.
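The series (15) is straightforward to evaluate numerically, which also makes its limiting cases easy to confirm: with Q = a = \omega_e = 0 it collapses to the Schwarzschild value 4m/b, and switching the plasma term on increases the angle. A minimal sketch (the function name and the shorthand d = \omega_e^2/\omega_\infty^2 are ours):

```python
import math

def bending_angle_plasma(m, Q, a, b, d):
    """Leading-order bending angle of Eq. (15); d = (omega_e / omega_inf)**2."""
    return (4*m/b + 2*m*d/b
            - 8*m*Q**2/(3*b**3) - 3*math.pi*Q**2/(4*b**2)
            + 2*m*Q**2*d/b**3 - math.pi*Q**2*d/(2*b**2)
            - 8*a**2*m/(3*b**3) + a**2*math.pi/(4*b**2)
            + 64*a**2*m*Q**2/(15*b**5) + 27*a**2*math.pi*Q**2/(32*b**4)
            - 4*a**2*m*d/(3*b**3) - 16*a**2*m*Q**2*d/(5*b**5)
            + 9*a**2*math.pi*Q**2*d/(16*b**4))

m, b = 1.0, 50.0
print(bending_angle_plasma(m, 0.0, 0.0, b, 0.0))   # Schwarzschild limit: 4m/b = 0.08
vacuum = bending_angle_plasma(m, 0.5, 0.5, b, 0.0)
plasma = bending_angle_plasma(m, 0.5, 0.5, b, 0.04)
print(plasma > vacuum)   # the plasma term increases the deflection
```

The comparison in the last line mirrors the statement above that, at fixed electron frequency, lowering the photon frequency (raising d) enlarges the bending angle.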
Graphical Behaviour: We now look into the graphical behavior of the black bounce Reissner–Nordström BH's deflection angle \tilde{\alpha} relative to the impact parameter b, for fixed values of the mass m and charge Q, while varying the bounce parameter a and the plasma term.

For fixed values of the mass m and charge Q, with \omega_e/\omega_\infty = 0.1 and varying values of the bounce parameter a, Figure 1 depicts the deflection angle \tilde{\alpha} versus the impact parameter b. For a ≥ 0, the bending angle \tilde{\alpha} attains its maximum at small values of the impact parameter b. As b increases, the bending angle decreases exponentially and approaches zero; for small values of b the angle is positive (deflection in the upward direction). Further, the bending angle \tilde{\alpha} shows an inverse relationship with the impact parameter b, and physically the bending angle exhibits stable behavior.

Figure 2 depicts the behavior of the deflection angle \tilde{\alpha} with respect to the impact parameter b for Q = a = 0.5 and 0 ≤ \omega_e/\omega_\infty ≤ 1. The deflection angle decreases exponentially and approaches zero as the impact parameter b goes to infinity; here, too, \tilde{\alpha} varies inversely with b.

Figure 3 exhibits the behavior of the deflection angle \tilde{\alpha} with respect to the impact parameter b for Q = a = 1 and 0 ≤ \omega_e/\omega_\infty ≤ 1. Again, the deflection angle decreases exponentially and approaches zero as b goes to infinity. In both cases (Q = a = 0.5 and Q = a = 1) the behavior of the angle is stable, and for Q = 1 the deflection angle as a function of b under a varying bounce parameter a behaves similarly to the Q = 0.5 case.

FIG. 1.
Bending angle's variation \tilde{\alpha} as a function of impact parameter b (m = 1, Q = 0.5m).

IV. DARK MATTER'S INFLUENCE ON DEFLECTION ANGLE

This section concerns the calculation of the black bounce Reissner–Nordström BH's deflection angle in a DM medium. The dark-atom concept has been proposed as a composite model of DM, which we study here through the bending of light. Dark matter possesses electromagnetic interactions through its frequency-dependent refractive index [78], so this medium has particular optical characteristics that a traveling photon may detect; the refractive index determines how fast a wave moves through the medium. In this regard, the refractive index for the black bounce Reissner–Nordström BH is defined as [78]

n(\omega) = 1 + \beta A_0 + A_2\omega^2,   (16)

where \omega is the photon frequency, \beta = \rho_0/(4m^2\omega^2) with \rho_0 the mass density of the dispersed DM particles, A_0 = -2\varepsilon^2 e^2, and A_2 ≥ 0. Using Equation (7), the optical Gaussian curvature \tilde{K} of the black bounce Reissner–Nordström BH in the DM medium, up to leading order, is

\tilde{K} \simeq \frac{3Q^2}{r^4(1 + A_2\omega^2 + A_0\beta)^2} - \frac{a^2}{r^4(1 + A_2\omega^2 + A_0\beta)^2} - \frac{12Q^2a^2}{r^6(1 + A_2\omega^2 + A_0\beta)^2} - \frac{2m}{r^3(1 + A_2\omega^2 + A_0\beta)^2} - \frac{6Q^2m}{r^5(1 + A_2\omega^2 + A_0\beta)^2} + \frac{10a^2m}{r^5(1 + A_2\omega^2 + A_0\beta)^2} + \frac{28Q^2a^2m}{r^7(1 + A_2\omega^2 + A_0\beta)^2} + \mathcal{O}(m^2, a^4, Q^4).   (17)

Using Equations (17) and (14), the bending angle \tilde{\alpha} of the black bounce Reissner–Nordström BH in the DM medium, up to leading order, is

\tilde{\alpha} \simeq \frac{4m}{b(1 + A_2\omega^2 + A_0\beta)^2} - \frac{8a^2m}{3b^3(1 + A_2\omega^2 + A_0\beta)^2} + \frac{a^2\pi}{4b^2(1 + A_2\omega^2 + A_0\beta)^2} + \frac{64a^2mQ^2}{15b^5(1 + A_2\omega^2 + A_0\beta)^2} - \frac{8mQ^2}{3b^3(1 + A_2\omega^2 + A_0\beta)^2} + \frac{27a^2\pi Q^2}{32b^4(1 + A_2\omega^2 + A_0\beta)^2} - \frac{3\pi Q^2}{4b^2(1 + A_2\omega^2 + A_0\beta)^2} - \frac{16a^2mA_2\omega^2}{3b^3(1 + A_2\omega^2 + A_0\beta)^2} + \frac{8mA_2\omega^2}{b(1 + A_2\omega^2 + A_0\beta)^2}

FIG. 2.
Bending angle's variation \tilde{\alpha} as a function of impact parameter b, for m = 1 and Q = a = 0.5m.

+ \frac{a^2\pi A_2\omega^2}{2b^2(1 + A_2\omega^2 + A_0\beta)^2} + \frac{128a^2mQ^2A_2\omega^2}{15b^5(1 + A_2\omega^2 + A_0\beta)^2} - \frac{16mQ^2A_2\omega^2}{3b^3(1 + A_2\omega^2 + A_0\beta)^2} + \frac{27a^2\pi Q^2A_2\omega^2}{16b^4(1 + A_2\omega^2 + A_0\beta)^2} - \frac{3\pi Q^2A_2\omega^2}{2b^2(1 + A_2\omega^2 + A_0\beta)^2}   (18)

+ \mathcal{O}(m^2, a^4, Q^4, A_2^2, \omega^4).   (19)

The BH's mass m, charge Q, bounce parameter a, impact parameter b, and the DM parameters are all parameters of the deflection angle obtained in Equation (19). It is to be observed that a photon deflected through the DM surrounding the black bounce Reissner–Nordström BH has a larger bending angle than in the vacuum case [74]. Eliminating the DM effect reduces Equation (19) to the bending angle in vacuum. Taking Q ≠ 0 and a = 0 in Equation (19) yields the Reissner–Nordström BH's deflection angle, while taking Q = 0 and a = 0 reduces it to the Schwarzschild BH's deflection angle in the DM medium.

V. HAWKING RADIATION

In this part, we use a topological technique based on the GBT and the Euler characteristic to derive the Hawking temperature of the black bounce Reissner–Nordström BH. To derive the Hawking temperature with this topological approach, one can employ a Wick rotation [79] and work with the Euclidean geometry of the two-dimensional spacetime without losing any information from the four-dimensional spacetime. The spherically static symmetric spacetime of the black bounce Reissner–Nordström BH is defined in Equation (3). Rewriting the four-dimensional metric in two-dimensional coordinates via the Wick-rotation conditions \theta = \pi/2 and \tau = it gives

ds^2 = \left(1 - \frac{2m}{\sqrt{r^2 + a^2}} + \frac{Q^2}{r^2 + a^2}\right)d\tau^2 + \frac{dr^2}{1 - \frac{2m}{\sqrt{r^2 + a^2}} + \frac{Q^2}{r^2 + a^2}}.
(20)

FIG. 3. Bending angle’s variation α̃ as a function of impact parameter b, for Q = a = m.

The formula to compute the Hawking temperature T_H of the black bounce Reissner–Nordström BH, with all physical constants restored, is [15]

T_H = \frac{1}{4\pi \mathcal{X}} \int_{r_h} \sqrt{g}\, R\, dr,  (21)

where g = 1 is the determinant of the metric in Equation (20) and r_h is the event horizon. Using the Ricci scalar R, the Euler characteristic \mathcal{X} = 1, and integrating along the event horizon, the Hawking temperature T_H of the black bounce Reissner–Nordström BH is

T_H = \frac{\sqrt{\left(m + \sqrt{m^2 - Q^2}\right)^2 - a^2}\,\sqrt{m^2 - Q^2}}{2\pi \left(m + \sqrt{m^2 - Q^2}\right)^3}.  (22)

One can observe that the obtained Hawking temperature T_H of the black bounce Reissner–Nordström BH depends on the mass m and charge Q of the BH and on the bounce parameter a, in agreement with [71]. We also note that the standard technique gives the same expression for the Hawking temperature as the topological technique. For the case Q ≠ 0 and a = 0, the Hawking temperature in Equation (22) reduces to the Hawking temperature of the Reissner–Nordström BH. Moreover, for Q = a = 0 it reduces to the Schwarzschild Hawking temperature, T_H = \frac{1}{8\pi m}. To observe the behavior of the Hawking temperature graphically, we plot T_H against the bounce parameter a in Figure 4. We observe that for Q = 0.5 the Hawking temperature decreases exponentially.

VI. GREYBODY FACTOR

This section examines the greybody factor bound of the black bounce Reissner–Nordström BH. Much research has been dedicated to estimating greybody factors; there is, however, a distinct analytic approach for obtaining bounds on them.
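As a quick numerical sanity check on Equation (22), the closed form can be compared against the standard surface-gravity route T_H = f'(r_h)/(4π). The sketch below is ours, not part of the paper (function names are our own; geometrized units G = c = 1 are assumed):

```python
import math

def f(r, m, Q, a):
    # Metric function of the black bounce Reissner-Nordström spacetime, Eq. (3)
    R = math.sqrt(r**2 + a**2)
    return 1 - 2*m/R + Q**2/R**2

def horizon(m, Q, a):
    # f = 0 at sqrt(r_h^2 + a^2) = m + sqrt(m^2 - Q^2)
    R = m + math.sqrt(m**2 - Q**2)
    return math.sqrt(R**2 - a**2)

def T_topological(m, Q, a):
    # Closed form of Eq. (22)
    R = m + math.sqrt(m**2 - Q**2)
    return math.sqrt(R**2 - a**2) * math.sqrt(m**2 - Q**2) / (2*math.pi*R**3)

def T_surface_gravity(m, Q, a, h=1e-6):
    # Standard route: T_H = f'(r_h)/(4 pi), via a central-difference derivative
    rh = horizon(m, Q, a)
    fp = (f(rh + h, m, Q, a) - f(rh - h, m, Q, a)) / (2*h)
    return fp / (4*math.pi)

# Both routes agree, and Q = a = 0 recovers the Schwarzschild value 1/(8 pi m)
assert abs(T_topological(1.0, 0.5, 0.3) - T_surface_gravity(1.0, 0.5, 0.3)) < 1e-8
assert abs(T_topological(1.0, 0.0, 0.0) - 1/(8*math.pi)) < 1e-12
```

The agreement of the two routes mirrors the statement above that the standard and topological techniques yield the same expression.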
The line-element for the Reissner–Nordström BH corrected by the bounce parameter in a static, spherically symmetric spacetime is defined in Equation (3). The lower bound on the transmission probability T is defined as [22, 80, 81]

T \geq \mathrm{sech}^2\left(\frac{1}{2\omega}\int_{-\infty}^{\infty} \varrho\, dr_*\right),  (23)

FIG. 4. Hawking temperature T_H vs the bounce parameter a (m = 1, Q = 0.5).

where

\varrho = \frac{\sqrt{[g'(r_*)]^2 + [\omega^2 - V(r_*) - g^2(r_*)]^2}}{2 g(r_*)},

r_* represents the tortoise coordinate, and g is a positive function. For the radial part, the equation of motion is

\frac{1}{h(r)^2}\frac{d}{dr}\left(h(r)^2 f(r) \frac{du(r)}{dr}\right) + \left(\frac{\omega^2}{f(r)} - \frac{l(l+1)}{h(r)^2}\right) u(r) = 0,  (24)

where u(r) denotes the oscillating scalar or vector field. Taking dr_* = dr/f(r), the potential is defined as [82]

V(r) = \frac{l(l+1) f(r)}{h(r)^2}.  (25)

The lower bound on the transmission probability T for g = \omega is

T \geq \mathrm{sech}^2\left(\frac{1}{2\omega}\int_{r_h}^{\infty} V(r)\, dr_*\right).  (26)

Substituting the values of V and dr_*, we obtain

T \geq \mathrm{sech}^2\left(\frac{1}{2\omega}\int_{r_h}^{\infty} \frac{l(l+1)}{h(r)^2}\, dr\right).  (27)

Substituting the value of h(r)^2 and integrating from r_h, the greybody bound T_b of the black bounce Reissner–Nordström BH is

T_b = T \geq \mathrm{sech}^2\left[\frac{1}{2\omega}\left(\frac{l(l+1)\pi}{2a} - \frac{l(l+1)}{a}\arctan\frac{\sqrt{\left(m + \sqrt{m^2 - Q^2}\right)^2 - a^2}}{a}\right)\right].  (28)

The bound T_b of the black bounce Reissner–Nordström BH depends on the mass m, charge Q, and bounce parameter a of the BH. Guo and Miao [74] have also calculated the greybody factor of the perturbation fields of the black bounce Reissner–Nordström BH. It is observed from the graphs that when the potential of the black bounce Reissner–Nordström BH is higher, the bound is lower.

A.
Graphical Analysis

The purpose of this section is to explain the graphical behavior of the greybody bound T_b and the potential of the black bounce Reissner–Nordström BH. For this purpose, we take fixed values of the charge Q, angular momentum l = 1, 2, and a varying bounce parameter a.

Figure 5 depicts the graphical behavior of the potential V with respect to r and of the greybody bound T_b with respect to ω. For 0 < a < 2, the potential V increases and attains its maximum value; however, as the bounce parameter a → 0, the potential decreases exponentially and approaches zero. It is also observed that as r → 0 the potential is large and attains its maximum value, while for large r the potential decreases from its maximum and almost approaches zero. Nevertheless, as a increases, the corresponding bound becomes lower, making it more difficult for the waves to pass through the higher potential. The bound T_b nonetheless shows convergent behavior, converging to 1.

FIG. 5. The left panel shows the potential for l = 1 (m = 1, Q = 0.5, a = 0.50–1.75); the corresponding bound T_b is shown in the right panel.

Figure 6 shows the graphical behavior of the potential V with respect to r and of the greybody bound T with respect to ω. For 0 < a < 2, the bound T behaves similarly to the l = 1 case.

VII. SHADOW BEHAVIOR

We now turn our attention to the shadow behavior of the black bounce RN black hole. Let us consider the Hamiltonian for light rays in a non-magnetized cold plasma with plasma frequency ω_p(r) [83]:

H = \frac{1}{2}\left(g^{ik} p_i p_k + \omega_p(r)^2\right) = \frac{1}{2}\left(-\frac{p_t^2}{A(r)} + \frac{p_r^2}{B(r)} + \frac{p_\varphi^2}{C(r)} + \omega_p(r)^2\right).
(29)

In the equation above, note that C(r) = h(r)^2 due to Equation (3). Furthermore, A(r) = f(r) and B(r) = A(r)^{-1}. Without loss of generality, we can restrict ourselves to the equatorial plane (θ = π/2) due to spherical symmetry and derive the equations of motion through

\dot{x}^i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial x^i},  (30)

which reveal two constants of motion:

E = A(r)\frac{dt}{d\lambda}, \qquad L = h(r)^2\frac{d\varphi}{d\lambda}.  (31)

FIG. 6. The left panel shows the potential for l = 2 (m = 1, Q = 0.5, a = 0.50–1.75); the corresponding bound T_b is shown in the right panel.

With the above equations, we can define the impact parameter as

b \equiv \frac{L}{E} = \frac{h(r)^2}{A(r)}\frac{d\varphi}{dt},  (32)

and the condition ds^2 = 0 gives the rate of change of the r-coordinate with respect to the azimuthal angle φ:

\left(\frac{dr}{d\varphi}\right)^2 = \frac{h(r)^2}{B(r)}\left(\frac{p(r)^2}{b^2} - 1\right),  (33)

where [83]

p(r)^2 = \frac{h(r)^2}{A(r)} n(r)^2 = \frac{h(r)^2}{A(r)}\left(1 - \frac{\omega_e^2}{\omega_\infty^2} A(r)\right),  (34)

since the non-gravitating plasma is assumed to be non-homogeneous. With our metric functions, the condition p'(r) = 0 allows one to find the photonsphere radius [83]; for our case,

\left(\frac{\omega_e^2}{\omega_\infty^2} A(r)^2 - A(r)\right) 2 h(r) h'(r) + h(r)^2 A'(r) = 0.  (35)

With the inclusion of the plasma parameter, the analytical expression for r_ph can be quite lengthy. However, for the case without plasma (i.e., n(r) = 1), we find the physical solution

r_{ph} = \frac{1}{2}\sqrt{18 m^2 - 8 Q^2 - 4 a^2 + 6\sqrt{9 m^4 - 8 m^2 Q^2}}.  (36)

A static observer at a distance r_obs from the black bounce black hole can measure the angular radius α_sh of the shadow.
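Equation (36) can be sanity-checked numerically: in the "areal" variable R = \sqrt{r^2 + a^2}, the photonsphere condition reduces to R^2 - 3mR + 2Q^2 = 0, and the Schwarzschild limit must give r_ph = 3m. A minimal sketch (our own function names, geometrized units assumed):

```python
import math

def r_ph(m, Q, a):
    # Photon-sphere radius of Eq. (36), n(r) = 1 (no plasma)
    return math.sqrt(18*m**2 - 8*Q**2 - 4*a**2
                     + 6*math.sqrt(9*m**4 - 8*m**2*Q**2)) / 2

m, Q, a = 1.0, 0.4, 0.3
R = math.sqrt(r_ph(m, Q, a)**2 + a**2)
# R = sqrt(r_ph^2 + a^2) solves R^2 - 3 m R + 2 Q^2 = 0, independently of a
assert abs(R**2 - 3*m*R + 2*Q**2) < 1e-10
# Schwarzschild limit Q = a = 0: r_ph = 3m
assert abs(r_ph(1.0, 0.0, 0.0) - 3.0) < 1e-12
```

Note that a enters Equation (36) only through the shift from R back to r, which is why the areal photonsphere radius is unaffected by the bounce parameter.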
From the black hole’s center to r_obs, simple geometry shows that Δx = \sqrt{B(r)}\,dr and Δy = h(r)\,dφ [83]:

\tan(\alpha_{sh}) = \lim_{\Delta x \to 0}\frac{\Delta y}{\Delta x} = h(r)\left(\frac{1}{B(r)}\right)^{1/2}\frac{d\varphi}{dr}\bigg|_{r = r_{obs}},  (37)

which can be simplified in terms of the critical impact parameter as

\sin^2(\alpha_{sh}) = \frac{b_{crit}^2}{p(r_{obs})^2}.  (38)

Here, b_crit can be obtained using the orbit equation [84]:

b_{crit}^2 = \frac{p(r_{ph})\left[2 h(r_{ph})^2 B(r_{ph}) p'(r_{ph}) - h(r_{ph})^2 B'(r_{ph}) p(r_{ph}) + B(r_{ph}) h'(r_{ph})^2 p(r_{ph})\right]}{B(r_{ph}) h'(r_{ph})^2 - h(r_{ph})^2 B'(r_{ph})}.  (39)

The critical impact parameter’s analytical expression with n(r) is somewhat complicated, but for the case n(r) = 1 we find

b_{crit}^2 = \frac{2 h(r_{ph})^3}{h(r_{ph}) - m}.  (40)

Finally, in terms of r_obs and r_ph, the shadow radius is (n(r) = 1)

R_{sh} = \left[\frac{2 h(r_{ph})^3 \left(h(r_{obs})^2 - 2 m\, h(r_{obs}) + Q^2\right)}{h(r_{obs})^2 \left(h(r_{ph}) - m\right)}\right]^{1/2}.  (41)

Next, we plot Equation (41), indicated by the dotted lines in Figure 7. We also include in the plot the case where the black bounce BH is surrounded by plasma (solid lines). For immediate comparison, we also plot the Schwarzschild and RN cases. Furthermore, while it is understood that the shadow cast by a non-spinning black hole is a circle, we can see in the plot the behavior of the shadow radius.

FIG. 7. Behavior of the shadow radius for a static observer at varying distance from the black bounce RN BH (Schwarzschild; RN with Q = 0.25m; a = 0.25–1.00, for δ = 10^{-1} and δ = 0). Here, we used Q = 0.25m.

First, without the bounce parameter, the RN case with Q = 0.25m decreases the shadow radius while following the general trend of the Schwarzschild case. For the Schwarzschild case, R_sh → 3\sqrt{3}\,m as r_obs becomes larger, but at r_obs = 25m we observe that it is still below this value of R_sh.
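The far-observer Schwarzschild limit quoted above, R_sh → 3\sqrt{3}\,m, follows from Equation (41) with Equation (36). A small numerical sketch (our own code, geometrized units assumed) confirms it, along with the shrinking of the shadow by the charge:

```python
import math

def R_sh(robs, m, Q, a):
    # Shadow radius of Eq. (41) with r_ph from Eq. (36), n(r) = 1
    rph = math.sqrt(18*m**2 - 8*Q**2 - 4*a**2
                    + 6*math.sqrt(9*m**4 - 8*m**2*Q**2)) / 2
    hph = math.sqrt(rph**2 + a**2)   # h(r) = sqrt(r^2 + a^2), Eq. (3)
    hob = math.sqrt(robs**2 + a**2)
    return math.sqrt(2*hph**3*(hob**2 - 2*m*hob + Q**2)
                     / (hob**2*(hph - m)))

# Distant observer, Q = a = 0: R_sh -> 3*sqrt(3)*m (Schwarzschild shadow)
assert abs(R_sh(1e6, 1.0, 0.0, 0.0) - 3*math.sqrt(3)) < 1e-4
# The RN charge decreases the shadow radius relative to Schwarzschild
assert R_sh(1e6, 1.0, 0.25, 0.0) < R_sh(1e6, 1.0, 0.0, 0.0)
```

The same function evaluated at finite r_obs reproduces the finite-distance behavior discussed around Figure 7.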
The effect of the bounce parameter is to produce lower values of R_sh and to make its rate of change with respect to r tend to zero at lower values of r_obs. This means that the observer does not need to go very far away to observe a constant shadow radius. Furthermore, the bounce parameter allows the formation of the shadow near the event horizon. With the plasma medium δ = 10^{-1}, we observe that the shadow radius follows the general trend of the δ = 0 case but is slightly increased. Such an increase depends on the value of the plasma parameter.

For completeness, let us analyze the effect of the dark matter refractive index n(ω) in Equation (16) instead of the plasma. The photonsphere can be found via

n(\omega)^2\left[h'(r) A(r) - A'(r) h(r)\right] = 0,  (42)

which reveals that the photonsphere radius is independent of the dark matter parameter. Through the orbit equation, it is easy to verify that the critical impact parameter in this case is

b_{crit}^2 = \frac{2 n(\omega)^2 h(r_{ph})^3}{h(r_{ph}) - m}.  (43)

Then, the shadow radius is given by

R_{sh} = n(\omega)\, h(r_{ph})\left[\frac{2 h(r_{ph})\left(h(r_{obs})^2 - 2 m\, h(r_{obs}) + Q^2\right)}{h(r_{obs})^2\left(h(r_{ph}) - m\right)}\right]^{1/2},  (44)

i.e., the shadow radius is increased by the factor n(ω).

VIII. CONCLUSION

In this work, we have discussed the Reissner–Nordström BH corrected by the bounce parameter and its properties: the black bounce family is globally free of curvature singularities and satisfies all observable weak-field tests. In the plasma and DM mediums, the attained bending angle, Equation (15), depends on the mass m and charge Q of the BH, the bounce parameter a, the impact parameter, and the medium’s parameters. We note that in Equation (15) the charge terms without the bounce parameter a are due to the charged nature of the BH, while the remaining terms are corrections from the bounce parameter a. It is also found that the effect of the plasma increases the deflection angle.
The bending angle is inversely proportional to the photon frequency, so the bending angle increases as the photon frequency is lowered at fixed electron frequency. It is also observed that the bending angle of the black bounce Reissner–Nordström BH increases in the DM medium compared with the vacuum case.

In both mediums, we have shown that in the absence of the bounce parameter one obtains the bending angle of the Reissner–Nordström BH, and by neglecting both the charge and the bounce parameter, that of the Schwarzschild BH. Moreover, one attains the bending angle of a non-plasma medium by taking ω_e = 0 in the plasma medium. Likewise, by ignoring the DM effect in the deflection angle of Equation (15), the angle reduces to that of a non-plasma medium. We also observed that the obtained bending angle in both mediums is directly proportional to the mass m, charge Q, and bounce parameter a, and inversely proportional to the impact parameter b.

Following that, we analyzed the graphical behavior of the bending angle α̃ with respect to b for fixed values of the mass m, charge Q, ω_e/ω_∞ = 0.1, and a varying bounce parameter a. We found that at small values of the impact parameter b the bending angle α̃ is maximal, and as b increases the bending angle decreases exponentially and approaches zero. Moreover, we investigated the deflection angle α̃ with respect to the impact parameter b by fixing m and Q, taking a = 0.5, 1, and varying the plasma factor. For Q = a = 0.5, 1, the deflection angle α̃ decreases exponentially and almost approaches zero as the impact parameter b goes to infinity. In all the above cases, we observed graphically that the deflection angle α̃ is inversely related to the impact parameter b and that the behavior of the angle is physically stable.
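The limits and trends summarized above can be checked directly against Equation (19). The sketch below (our own function, with A2w2 = A_2ω^2 and A0beta = A_0β as combined inputs, geometrized units assumed) verifies the Schwarzschild vacuum limit 4m/b, the fall-off with b, and the enhancement from A_0β < 0 (recall A_0 = -2ε^2e^2):

```python
import math

def alpha_dm(b, m, Q, a, A2w2=0.0, A0beta=0.0):
    # Leading-order bending angle of Eq. (19) in the dark-matter medium
    D = (1 + A2w2 + A0beta)**2
    return (4*m/b - 8*a**2*m/(3*b**3) + a**2*math.pi/(4*b**2)
            + 64*a**2*m*Q**2/(15*b**5) - 8*m*Q**2/(3*b**3)
            + 27*a**2*math.pi*Q**2/(32*b**4) - 3*math.pi*Q**2/(4*b**2)
            - 16*a**2*m*A2w2/(3*b**3) + 8*m*A2w2/b
            + a**2*math.pi*A2w2/(2*b**2) + 128*a**2*m*Q**2*A2w2/(15*b**5)
            - 16*m*Q**2*A2w2/(3*b**3) + 27*a**2*math.pi*Q**2*A2w2/(16*b**4)
            - 3*math.pi*Q**2*A2w2/(2*b**2)) / D

# Schwarzschild vacuum limit: Q = a = 0 and no DM gives 4m/b
assert abs(alpha_dm(20.0, 1.0, 0.0, 0.0) - 4*1.0/20.0) < 1e-12
# Inverse relationship with the impact parameter b
assert alpha_dm(10.0, 1.0, 0.5, 0.5) > alpha_dm(20.0, 1.0, 0.5, 0.5) > alpha_dm(40.0, 1.0, 0.5, 0.5)
# A0*beta < 0 enhances the angle relative to vacuum, as noted in the text
assert alpha_dm(20.0, 1.0, 0.5, 0.5, A0beta=-0.05) > alpha_dm(20.0, 1.0, 0.5, 0.5)
```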
Furthermore, we have computed the Hawking temperature using a topological method involving two invariants, namely the two-dimensional Euler characteristic and the GBT. The obtained expression for the Hawking temperature T_H of the black bounce Reissner–Nordström BH in Equation (22) depends on the mass m and charge Q of the BH and on the bounce parameter a. We also noticed that the standard technique gives the same Hawking temperature expression as the topological technique. It is to be mentioned here that for the case Q ≠ 0 and a = 0, the Hawking temperature in Equation (22) reduces to that of the Reissner–Nordström BH, and for Q = a = 0 it reduces to the Schwarzschild Hawking temperature, T_H = 1/(8πm). Furthermore, we graphically observed that the Hawking temperature decreases exponentially with the bounce parameter.

We have also calculated the greybody bound T_b and found that the bound for the black bounce Reissner–Nordström BH depends on the mass m, charge Q of the BH, and bounce parameter a. Moreover, we observed that the potential V increases and attains its maximum value for l = 1, 2. As a increases, the potential decreases exponentially and approaches zero. When r → 0 the potential is high, and for large r it approaches zero. The corresponding bound becomes lower as a increases. Furthermore, the greybody factor’s bound exhibits convergent behavior, converging to 1. We also observed that for large values of a and small r, the potential is higher, making it difficult for waves to pass through that potential.

Finally, we also explored the effect of the bounce parameter on the behavior of the shadow radius, including when the BH is surrounded by plasma.
First, the effect of the bounce parameter is to allow shadow formation closer to the black hole and at a larger radius than in the Schwarzschild or RN cases; here, the rate at which the shadow increases is also larger. Moreover, the bounce parameter quickly drives the rate of change of the shadow radius to zero even at low values of r_obs. Finally, the effect of the plasma is simply to increase the shadow radius of the black hole affected by the bounce parameter. These parameters can indeed change the shadow radius, which sophisticated astronomical devices may detect.

ACKNOWLEDGMENTS

A. Ö. and R. P. would like to acknowledge networking support by the COST Action CA18108 – Quantum gravity phenomenology in the multi-messenger approach (QG-MM).

[1] Albert Einstein, “Lens-like action of a star by the deviation of light in the gravitational field,” Science 84, 506–507 (1936).
[2] Roger Penrose, “Gravitational collapse and space-time singularities,” Phys. Rev. Lett. 14, 57–59 (1965).
[3] Stefano Ansoldi, “Spherical black holes with regular center: A review of existing models including a recent realization with Gaussian sources,” in Conference on Black Holes and Naked Singularities (2008), arXiv:0802.0330 [gr-qc].
[4] J. M. Bardeen, “Non-singular general-relativistic gravitational collapse,” Proceedings of GR5, Tbilisi, USSR, 174 (1968).
[5] Eloy Ayón-Beato and Alberto García, “New regular black hole solution from nonlinear electrodynamics,” Phys. Lett. B 464, 25–29 (1999).
[6] I. Dymnikova, “Vacuum nonsingular black hole,” Gen. Relativ. Gravit. 24, 235–242 (1992).
[7] José P. S. Lemos and Vilson T. Zanchin, “Regular black holes: Electrically charged solutions, Reissner–Nordström outside a de Sitter core,” Phys. Rev. D 83, 124005 (2011).
[8] Arun Kumar, Sushant G. Ghosh, and Sunil D. Maharaj, “Nonsingular black hole chemistry,” Physics of the Dark Universe 30, 100634 (2020).
[9] A. Eichhorn and A.
Held, “Image features of spinning regular black holes based on a locality principle,” Eur. Phys. J. C 81, 933 (2021).
[10] S. W. Hawking, “Particle creation by black holes,” Commun. Math. Phys. 43 (1975), 10.1007/BF02345020.
[11] H. Hassanabadi, W. S. Chung, B. C. Lütfüoğlu, and E. Maghsoodi, “Effects of a new extended uncertainty principle on Schwarzschild and Reissner–Nordström black holes thermodynamics,” Int. J. Mod. Phys. A 36, 2150036 (2021).
[12] S. Hassanabadi, J. Kříž, W. S. Chung, B. C. Lütfüoğlu, E. Maghsoodi, and H. Hassanabadi, “Thermodynamics of the Schwarzschild and Reissner–Nordström black holes under higher-order generalized uncertainty principle,” Eur. Phys. J. Plus 136, 918 (2021), arXiv:2110.01363 [gr-qc].
[13] Hao Chen, Bekir Can Lütfüoğlu, Hassan Hassanabadi, and Zheng-Wen Long, “Thermodynamics of the Reissner–Nordström black hole with quintessence matter on the EGUP framework,” Phys. Lett. B 827, 136994 (2022).
[14] Yu-Peng Zhang, Shao-Wen Wei, and Yu-Xiao Liu, “Topological approach to derive the global Hawking temperature of (massive) BTZ black hole,” Phys. Lett. B 810 (2020).
[15] Ali Övgün and İzzet Sakallı, “Hawking radiation via Gauss–Bonnet theorem,” Ann. Phys. 413, 168071 (2020).
[16] S. I. Kruglov, “Magnetically charged black hole in framework of nonlinear electrodynamics model,” Int. J. Mod. Phys. A 33 (2018).
[17] S. Fernando, “Greybody factors of charged dilaton black holes in 2 + 1 dimensions,” Gen. Relativ. Gravit. 37, 461–481 (2005).
[18] Wontae Kim and John J. Oh, “Greybody factor and Hawking radiation of charged dilatonic black holes,” J. Korean Phys. Soc. 52, 986–991 (2008).
[19] Jorge Escobedo, “Greybody factors,” Master’s Thesis, Univ. of Amsterdam (2008).
[20] Maulik K. Parikh and Frank Wilczek, “Hawking radiation as tunneling,” Phys. Rev. Lett. 85, 5042–5045 (2000).
[21] Chris H. Fleming, “Hawking radiation as tunneling,” Univ. of Maryland, Dept. of Phys., Tech.
Rep. (2005).
[22] Matt Visser, “Some general bounds for one-dimensional scattering,” Phys. Rev. A 59, 427–438 (1999).
[23] Petarpa Boonserm and Matt Visser, “Bounding the greybody factors for Schwarzschild black holes,” Phys. Rev. D 78, 101502 (2008).
[24] W. Javed, I. Hussain, and A. Övgün, “Weak deflection angle of Kazakov–Solodukhin black hole in plasma medium using Gauss–Bonnet theorem and its greybody bonding,” Eur. Phys. J. Plus 137 (2022).
[25] Henk Hoekstra et al., “Masses of galaxy clusters from gravitational lensing,” Space Sci. Rev. 177, 75–118 (2013).
[26] M. M. Brouwer et al., “Studying galaxy troughs and ridges using weak gravitational lensing with the Kilo-Degree Survey,” Mon. Not. Roy. Astron. Soc. 481, 5189 (2018).
[27] R. Ali Vanderveld, Michael J. Mortonson, Wayne Hu, and Tim Eifler, “Testing dark energy paradigms with weak gravitational lensing,” Phys. Rev. D 85, 103518 (2012).
[28] C. R. Keeton, C. S. Kochanek, and E. E. Falco, “The optical properties of gravitational lens galaxies as a probe of galaxy structure and evolution,” Astrophys. J. 509, 561–578 (1998).
[29] A. Bhadra, “Gravitational lensing by a charged black hole of string theory,” Phys. Rev. D 67, 103009 (2003).
[30] Richard Whisker, “Strong gravitational lensing by braneworld black holes,” Phys. Rev. D 71, 064004 (2005).
[31] Songbai Chen and Jiliang Jing, “Strong field gravitational lensing in the deformed Hořava–Lifshitz black hole,” Phys. Rev. D 80, 024036 (2009).
[32] Kamal K. Nandi, Yuan-Zhong Zhang, and Alexander V. Zakharov, “Gravitational lensing by wormholes,” Phys. Rev. D 74, 024020 (2006).
[33] Ernesto F. Eiroa, Gustavo E. Romero, and Diego F. Torres, “Reissner–Nordström black hole lensing,” Phys. Rev. D 66, 024010 (2002).
[34] Yashmitha Kumaran and Ali Övgün, “Weak deflection angle of extended uncertainty principle black holes,” Chin. Phys. C 44, 025101 (2020), arXiv:1905.11710 [gr-qc].
[35] Yashmitha Kumaran and Ali Övgün, “Deflection angle and shadow of the Reissner–Nordström black hole with higher-order magnetic correction in Einstein-nonlinear-Maxwell fields,” Symmetry 14 (2022).
[36] M. C. Werner, “Gravitational lensing in the Kerr–Randers optical geometry,” Gen. Relativ. Gravit. 44, 3047 (2012).
[37] Asahi Ishihara et al., “Gravitational bending angle of light for finite distance and the Gauss–Bonnet theorem,” Phys. Rev. D 94, 084015 (2016).
[38] Asahi Ishihara, Yusuke Suzuki, Toshiaki Ono, and Hideki Asada, “Finite-distance corrections to the gravitational bending angle of light in the strong deflection limit,” Phys. Rev. D 95, 044017 (2017).
[39] Toshiaki Ono, Asahi Ishihara, and Hideki Asada, “Gravitomagnetic bending angle of light with finite-distance corrections in stationary axisymmetric spacetimes,” Phys. Rev. D 96, 104037 (2017).
[40] Gabriel Crisnejo and Emanuel Gallo, “Weak lensing in a plasma medium and gravitational deflection of massive particles using the Gauss–Bonnet theorem. A unified treatment,” Phys. Rev. D 97, 124016 (2018).
[41] Zonghai Li and Ali Övgün, “Finite-distance gravitational deflection of massive particles by a Kerr-like black hole in the bumblebee gravity model,” Phys. Rev. D 101, 024040 (2020).
[42] Zonghai Li, Guodong Zhang, and Ali Övgün, “Circular orbit of a particle and weak gravitational lensing,” Phys. Rev. D 101, 124058 (2020).
[43] J. H. Oort, “The force exerted by the stellar system in the direction perpendicular to the galactic plane and some related problems,” Bull. Astron. Inst. Netherlands 6, 249 (1932).
[44] Fritz Zwicky, “On the masses of nebulae and of clusters of nebulae,” Astrophys. J. 86, 217 (1937).
[45] J. L. Feng, “Dark matter candidates from particle physics and methods of detection,” Ann. Rev. Astron. Astrophys. 48, 495–545 (2010).
[46] N. Jarosik et al., Astrophys. J. Suppl. Ser. 192, 14 (2011).
[47] A.
Övgün, “Deflection angle of photons through dark matter by black holes and wormholes using Gauss–Bonnet theorem,” Universe 5, 115 (2019).
[48] Reggie C. Pantig and Ali Övgün, “Dark matter effect on the weak deflection angle by black holes at the center of Milky Way and M87 galaxies,” Eur. Phys. J. C 82, 391 (2022), arXiv:2201.03365 [gr-qc].
[49] Reggie C. Pantig and Emmanuel T. Rodulfo, “Weak deflection angle of a dirty black hole,” Chin. J. Phys. 66, 691–702 (2020).
[50] Reggie C. Pantig and Ali Övgün, “Black hole in quantum wave dark matter,” Fortschr. Phys. 2022, 2200164 (2022), arXiv:2210.00523 [gr-qc].
[51] Reggie C. Pantig and Ali Övgün, “Dehnen halo effect on a black hole in an ultra-faint dwarf galaxy,” JCAP 08, 056 (2022), arXiv:2202.07404 [astro-ph.GA].
[52] Kazunori Akiyama et al. (Event Horizon Telescope), “First M87 Event Horizon Telescope results. I. The shadow of the supermassive black hole,” Astrophys. J. Lett. 875, L1 (2019).
[53] Kazunori Akiyama et al. (Event Horizon Telescope), “First Sagittarius A* Event Horizon Telescope results. I. The shadow of the supermassive black hole in the center of the Milky Way,” Astrophys. J. Lett. 930, L12 (2022).
[54] James M. Bardeen, William H. Press, and Saul A. Teukolsky, “Rotating black holes: Locally nonrotating frames, energy extraction, and scalar synchrotron radiation,” Astrophys. J. 178, 347–370 (1972).
[55] J. L. Synge, “The escape of photons from gravitationally intense stars,” Mon. Not. Roy. Astron. Soc. 131, 463–466 (1966).
[56] J. P. Luminet, “Image of a spherical black hole with thin accretion disk,” Astron. Astrophys. 75, 228–235 (1979).
[57] Ramesh Narayan, Michael D. Johnson, and Charles F. Gammie, “The shadow of a spherically accreting black hole,” Astrophys. J. Lett. 885, L33 (2019).
[58] Yang Guo and Yan-Gang Miao, “Charged black-bounce spacetimes: Photon rings, shadows and observational appearances,” Nucl. Phys.
B 983, 115938 (2022), arXiv:2112.01747 [gr-qc].
[59] Reggie C. Pantig, Paul K. Yu, Emmanuel T. Rodulfo, and Ali Övgün, “Shadow and weak deflection angle of extended uncertainty principle black hole surrounded with dark matter,” Annals Phys. 436, 168722 (2022), arXiv:2104.04304 [gr-qc].
[60] R. A. Konoplya and A. Zhidenko, “Solutions of the Einstein equations for a black hole surrounded by a galactic halo,” Astrophys. J. 933, 166 (2022), arXiv:2202.02205 [gr-qc].
[61] R. A. Konoplya, “Shadow of a black hole surrounded by dark matter,” Phys. Lett. B 795, 1–6 (2019), arXiv:1905.00064 [gr-qc].
[62] Zhaoyi Xu, Xian Hou, Xiaobo Gong, and Jiancheng Wang, “Black hole space-time in dark matter halo,” JCAP 09, 038 (2018).
[63] Zhaoyi Xu, Xiaobo Gong, and Shuang-Nan Zhang, “Black hole immersed dark matter halo,” Phys. Rev. D 101, 024029 (2020).
[64] Reggie C. Pantig and Emmanuel T. Rodulfo, “Rotating dirty black hole and its shadow,” Chin. J. Phys. 68, 236–257 (2020), arXiv:2003.06829 [gr-qc].
[65] Wajiha Javed, Hafsa Irshad, Reggie C. Pantig, and Ali Övgün, “Weak deflection angle by Kalb–Ramond traversable wormhole in plasma and dark matter mediums,” Universe 8 (2022), 10.3390/universe8110599.
[66] Wajiha Javed, Sibgha Riaz, Reggie C. Pantig, and Ali Övgün, “Weak gravitational lensing in dark matter and plasma mediums for wormhole-like static aether solution,” Eur. Phys. J. C 82, 1057 (2022), arXiv:2212.00804 [gr-qc].
[67] Kimet Jusufi, Mubasher Jamil, and Tao Zhu, “Shadows of Sgr A* black hole surrounded by superfluid dark matter halo,” Eur. Phys. J. C 80, 354 (2020), arXiv:2005.05299 [gr-qc].
[68] Sourabh Nampalliwar, Saurabh Kumar, Kimet Jusufi, Qiang Wu, Mubasher Jamil, and Paolo Salucci, “Modeling the Sgr A* black hole immersed in a dark matter spike,” Astrophys. J. 916, 116 (2021), arXiv:2103.12439 [astro-ph.HE].
[69] Thomas Berry, Alex Simpson, and Matt Visser, “General class of ‘quantum deformed’ regular black holes,” Universe 7 (2021).
[70] Hyat Huang and Jinbo Yang, “Charged Ellis wormhole and black bounce,” Phys. Rev. D 100, 124063 (2019).
[71] Edgardo Franzin et al., “Charged black-bounce spacetimes,” J. Cosmol. Astropart. Phys. 07 (2021).
[72] M. S. Morris and K. S. Thorne, “Wormholes in space-time and their use for interstellar travel: A tool for teaching general relativity,” Am. J. Phys. 56, 395–412 (1988).
[73] Sean A. Hayward, “Formation and evaporation of nonsingular black holes,” Phys. Rev. Lett. 96, 031103 (2006).
[74] Yang Guo and Yan-Gang Miao, “Bounce corrections to gravitational lensing, quasinormal spectral stability and gray-body factors of Reissner–Nordström black holes,” (2022), arXiv:2201.02971 [gr-qc].
[75] Francisco S. N. Lobo, Manuel E. Rodrigues, Marcos V. de S. Silva, Alex Simpson, and Matt Visser, “Novel black-bounce spacetimes: Wormholes, regularity, energy conditions, and causal structure,” Phys. Rev. D 103, 084052 (2021).
[76] Alex Simpson, Prado Martín-Moruno, and Matt Visser, “Vaidya spacetimes, black-bounces, and traversable wormholes,” Class. Quantum Grav. 36, 145007 (2019).
[77] G. W. Gibbons and M. C. Werner, “Applications of the Gauss–Bonnet theorem to gravitational lensing,” Class. Quantum Grav. 25, 235009 (2008).
[78] David C. Latimer, “Dispersive light propagation at cosmological distances: Matter effects,” Phys. Rev. D 88, 063517 (2013).
[79] G. W. Gibbons and S. W. Hawking, “Cosmological event horizons, thermodynamics, and particle creation,” Phys. Rev. D 15, 2738–2751 (1977).
[80] Petarpa Boonserm and Matt Visser, “Bounding the Bogoliubov coefficients,” Annals of Physics 323, 2779–2798 (2008).
[81] P. Boonserm, “Rigorous bounds on transmission, reflection and Bogoliubov coefficients,” Ph.D. thesis, Victoria Univ. of Wellington (2009).
[82] Tritos Ngampitipan and Petarpa Boonserm, “Bounding the greybody factors for non-rotating black holes,” Int. J. Mod. Phys. D 22, 1350058 (2013).
[83] Volker Perlick, Oleg Yu.
Tsupko, and Gennady S. Bisnovatyi-Kogan, “Influence of a plasma on the shadow of a spherically symmetric black hole,” Phys. Rev. D 92, 104031 (2015).
[84] Reggie C. Pantig and Ali Övgün, “Testing dynamical torsion effects on the charged black hole’s shadow, deflection angle and greybody with M87* and Sgr. A* from EHT,” Annals Phys. 448, 169197 (2023), arXiv:2206.02161 [gr-qc].
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' For this, the optical geometry and the Gibbons–Werner approach are utilized to obtain the bending angle in the weak field limitations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' We examine that the impact of these mediums increases the black hole’s bending angle.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' In addition, we graphically study the deflection angle of light with respect to the impact parameter and examine that the bounce parameter directly affects the angle.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Further, we compute the Hawking radiation via a topological method involving two invariants and verify our obtained result with the standard method of calculating the Hawking temperature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' In addition, we compute the greybody factor’s bound of the black hole.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Moreover, we analyze the bound graphically and observe that the bound shows convergent behavior.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' We also study that our attained results reduce the results of the Reissner–Nordstr¨om and Schwarzschild black holes by reducing the parameters.' 
Finally, we probe how the bounce parameter affects the shadow radius and compare it to the shadow produced when the black hole is immersed in plasma. It is revealed that the rate at which the shadow radius changes with respect to r tends to zero under the effect of the bounce parameter, while the plasma merely increases the shadow radius.

PACS numbers: 95.30.Sf, 98.62.Sb, 97.60.Lf
Keywords: General Relativity; Bending Angle; Gauss–Bonnet Theorem; Plasma Medium; Black Hole; Greybody; Hawking Temperature

I. INTRODUCTION

General relativity (GR) is the theory of gravity proposed by Einstein in 1916. In GR, Einstein gave the idea of black holes (BHs) [1]. Acknowledging Newton’s corpuscular theory of light, which assumed that photons are ultra-light particles, the geologist Michell proposed the existence of dark stars. Today, these dark stars are known as BHs.
Black holes are fascinating astronomical objects with a gravitational attraction so powerful that nothing can escape them, not even light. According to the no-hair theorem, all astrophysical BHs are fully defined by their masses and spins. A BH has two main components: the singularity and the event horizon. The question that attracts the most attention in GR concerns the inner structure of a BH. However, because of the presence of the spacetime singularity, where the curvature diverges and GR breaks down, this question cannot be answered easily. The singularity theorems, presented by Penrose and Hawking [2], state that gravitational collapse under physically valid circumstances always results in the formation of a singularity. Yet in some scenarios (such as a cosmological constant in the spacetime region), the singularity theorems’ assumptions may not apply.
Black holes having regular centers or, in other words, no singularity are called regular BHs or non-singular BHs. The first regular BH with horizons and no central singularity [3] was proposed by Bardeen [4]. Near the origin, the Bardeen BH behaves like a de Sitter spacetime; however, for r → ∞, it acts like a Schwarzschild BH [3]. Later, Ayon-Beato and Garcia [5] showed that Bardeen’s model is an exact solution of GR coupled to non-linear electrodynamics. There has been significant progress in the study and application of regular BHs [6–8], as well as regular rotating black holes [8, 9]. Most of these solutions are based on Bardeen’s concept, which uses non-linear electrodynamics as a source. According to Hawking [10], a BH can emit thermal radiation once quantum effects are taken into account, and Hawking radiation is the term used to describe this radiation.
The production and annihilation of particles are theoretically feasible in the context of quantum field theory. When pair creation occurs near the BH’s horizon, one of the particles falls back into the BH while the other departs from the horizon. The escaping particles are observed by an outside observer as Hawking radiation [11–13]. According to GR, the spacetime bent by a BH acts as a gravitational potential inside which particles move. Some of the Hawking radiation is reflected back into the BH, while the remainder is transmitted through the potential to infinity. In this context, the transmission probability is known as the greybody factor. Several methods for obtaining Hawking radiation have been proposed.

∗ wajiha.javed@ue.edu.pk
† mehakatique1997@gmail.com
‡ rcpantig@mapua.edu.ph
§ ali.ovgun@emu.edu.tr

arXiv:2301.01855v1 [gr-qc] 5 Jan 2023

Using a topological method, Zhang, Wei, and Liu [14] investigated the Hawking temperature of the BTZ BH. Övgün et al. [15] computed the Hawking temperature for BHs by applying the topological strategy.
Kruglov [16] explored the Hawking temperature of a magnetically charged BH through its surface gravity and horizon in the context of non-linear electrodynamics. The greybody factor can be calculated in a variety of ways. The matching approach can be used to derive an approximate greybody factor [17–19]. The WKB approximation may be used to calculate the greybody factor if the gravitational potential is high enough [20, 21]. The greybody factor may also be calculated using a rigorous bound, which is an alternative to approximation; the bound can be used to describe a BH qualitatively. Visser [22] inspected some extremely general bounds on the reflection and transmission coefficients for one-dimensional potential scattering. Boonserm and Visser [23] calculated the bound on the greybody factor of Schwarzschild BHs by examining the Regge–Wheeler equation for wave phase, angular momentum, and arbitrary particle spin.
Javed, Hussain, and Övgün [24] worked out bounds on the greybody factor of the Kazakov–Solodukhin BH. The gravitational lensing (GL) effect, which states that a light beam is deflected while passing by a massive object, is one of GR’s most important predictions. For determining the mass of galaxies and clusters [25, 26], as well as discovering dark energy and dark matter (DM) [27], GL has become one of the most powerful instruments in astronomy and cosmology. Since the first measurements of the Sun’s gravitational bending of light, the lens equation has been used to examine the GL effects of BHs, wormholes, cosmic strings, and other objects. GL comes in two types: strong GL and weak GL. Strong GL is intense enough to generate multiple images, such as arcs or Einstein rings; the geometry is favorable and the bending is rather large. Weak GL, in contrast, is not intense enough to create multiple images, and the geometry is less suitable.
Since the 19th century, various studies of GL have been carried out not only for BHs but also for wormholes, cosmic strings, global monopoles, and neutron stars [28–35]. Gibbons and Werner (GW) presented a technique for calculating the deflection angle in 2008. Their technique was developed using the Gauss–Bonnet theorem (GBT) and the optical geometry of the BH’s spacetime, with the source and observer in asymptotic regions. Werner [36] soon extended this approach to stationary spacetimes. In the GBT, one can utilize the domain GS confined by the light ray together with a circular boundary curve CS centered at the lens, which the light ray intersects at the source and the receiver. The source and receiver are considered to be at the same coordinate distance S from the lens.
The GBT in the optical metric, using the weak field approximation, can be expressed as follows

  ∫∫_{GS} K̃ dS + ∫_{∂GS} k dt + Σ_i ϵ_i = 2π X(GS),   (1)

wherein K̃ indicates the Gaussian optical curvature, k stands for the geodesic curvature, dS is the surface element of the optical geometry, and GS is a region accommodating the light ray’s source, the observer’s frame of reference, and the lens’s center. At the i-th vertex, ϵ_i represents the exterior angle. For simplicity, we suppose that as the radial distance S → ∞, the sum of the exterior angles θ_i becomes π for the observer. The asymptotic deflection angle α̃ can then be calculated as:

  α̃ = − ∫_0^π ∫_{b/sin φ}^∞ K̃ dS,   (2)

where b represents the impact parameter. The integral of the GBT may be evaluated over an infinite region bounded by the ray of light. Instead of utilizing an asymptotic receiver and source, Ishihara et al. [37, 38] modified this approach for finite distances.
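As a numerical sanity check on Eq. (2) (not part of the paper), one can insert the leading-order Gaussian optical curvature of the Schwarzschild optical metric, K̃ ≈ −2m/r³ with surface element dS ≈ r dr dφ, and recover the textbook weak deflection angle 4m/b. A minimal sketch, assuming SciPy is available; the function name and the finite cutoff standing in for infinity are ours:

```python
import numpy as np
from scipy.integrate import dblquad

def weak_deflection_gbt(m, b, r_max=1e7):
    """Evaluate Eq. (2) for Schwarzschild at leading order:
    alpha = -int_0^pi int_{b/sin(phi)}^inf  K dS,
    with K ~ -2m/r^3 and dS ~ r dr dphi, so the integrand is (-K)*r = 2m/r^2."""
    integrand = lambda r, phi: 2.0 * m / r**2
    alpha, _ = dblquad(integrand,
                       0.0, np.pi,                   # phi range
                       lambda phi: b / np.sin(phi),  # inner r limit
                       lambda phi: r_max)            # finite stand-in for infinity
    return alpha

m, b = 1.0, 1.0e4
print(weak_deflection_gbt(m, b), 4.0 * m / b)  # GBT integral vs. the textbook 4m/b
```

For b ≫ m the two numbers agree to well under a percent; the small residual comes from truncating the radial integral at r_max.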
The finite-distance approach was then applied to axisymmetric spacetimes by Ono et al. [39]. In the plasma medium, the GBT was used by Crisnejo and Gallo [40] to determine the gravitational bending of light. Using massive particles and the Jacobi–Maupertuis Randers–Finsler metric within the GBT, Li et al. [41, 42] investigated the finite-distance effects on the weak bending angle. The story of dark matter begins with the startling discoveries of missing matter: by Oort [43] in the Galactic disk, which modern observations have not confirmed, and by Zwicky [44] in the Coma cluster, much later understood to be “DM”. Dark matter comprises particles that do not absorb, reflect, or emit light, making it impossible to detect them using electromagnetic radiation.
Only gravitational interactions can detect DM, and we know that DM is non-baryonic, non-relativistic, and possesses weak non-gravitational interactions. Weakly interacting massive particles (WIMPs), super-WIMPs, axions, and sterile neutrinos are the four main candidates for DM [45]. Dark matter constitutes about 85% of the total mass of the Universe [46] and is used to explain the strange behavior of stars and galaxy dynamics. In the DM medium, Pantig and Övgün [47–51] studied weak GL by wormholes and BHs. Light that passes close to a BH is deflected by the gravitational field, creating the BH’s shadow. The BH’s shadow is a dark area frequently surrounded by a luminous ring. The BH’s mass and angular momentum determine the shadow’s size and form.
Many scientists attempted to predict how the observable appearance of a BH surrounded by bright material would look well before the spectacular imaging of the BH’s shadow by the Event Horizon Telescope Collaboration [52, 53]. For instance, Bardeen et al. [54] examined the shadows of Kerr BHs, while Synge [55] studied the shadow in Schwarzschild spacetime. The bright accretion disc surrounding a BH was drawn by hand by Luminet [56]. In addition, owing to the transparent emission close to the BH, a BH is predicted to display its shadow, which is caused by gravitational light deflection and by photons captured at its event horizon. The photon ring, a geometric characteristic of spacetime, determines the shadow radius [57]. To our knowledge, few studies of the RN black hole with a bounce parameter have been conducted. For instance, [58] considered its photon rings and shadows.
In this study, we analyze such a metric under the influence of plasma and dark matter. With the shadow cast, one can also determine imprints of spacetime, and several studies have been conducted on using black holes for dark matter detection [50, 51, 59–68]. This paper aims to study the weak GL of the black bounce Reissner–Nordström spacetime utilizing optical geometry and the GBT in plasma and DM mediums. Moreover, it is of interest to calculate the Hawking temperature and the greybody bound of the BH. We will also study the graphical behavior of the deflection angle and the greybody bound, focusing on how the bounce parameter (introduced in the Reissner–Nordström BH) affects the bending angle, the Hawking temperature, and the bound. The layout of our paper is as follows. Section II discusses the black bounce Reissner–Nordström spacetime.
In Section III, we obtain the optical metric from a four-dimensional spherically symmetric metric, compute the deflection angle of the BH in a plasma medium using the GBT, and analyze its graphical behavior. The computation of the bending angle in the case of a DM medium is given in Section IV. Section V is devoted to investigating the Hawking temperature of the black bounce Reissner–Nordström BH via the GBT. The computation of the bound on the greybody factor of the black bounce Reissner–Nordström BH, and the graphical behavior of the bound, is addressed in Section VI. Section VII discusses the shadow behavior. Section VIII sums up the findings of this research and proposes a research direction. Throughout the paper, we use natural units G = c = 1 and metric signature (−, +, +, +).
II. BLACK BOUNCE REISSNER–NORDSTRÖM SPACETIME

One of the most significant problems is the prediction of a spacetime singularity within a BH or at the start of the universe, which implies that GR fails there. To address the singularity issue, Bardeen [4] was the first to put forward the idea of regular BHs, which has continuously attracted scientific interest. It is convenient to consider regular BHs because of the problematic nature of spacetime singularities. Regular BHs are solutions of the gravity equations that have an event horizon but no singularities in spacetime. Based on the bounce and quantum corrections, a wide range of regular black hole solutions have been obtained [69–71]. The black bounce spacetime smoothly interpolates between the ordinary Schwarzschild BH and the Morris–Thorne traversable wormhole [72]. It is noteworthy that the geometry is regular throughout, and one can have a distinct type of “regular BH”, where the “origin” r = 0 can be spacelike, null, or timelike, as long as the parameter a ≠ 0.
Additionally, it was demonstrated that the spacetime metric may be utilized to characterize several interesting physical situations, such as a developing black bounce, a wormhole to black-bounce transition, and the opposite black-bounce to wormhole transition. For the Reissner–Nordström BH, a regularizing procedure was recently proposed [71] which does not produce a standard regular BH [73], such as the Bardeen or Hayward BHs; rather, it produces a charged regular BH called a black bounce Reissner–Nordström or charged black bounce. In a static spherically symmetric spacetime, the line element for the black bounce Reissner–Nordström BH can be described as [74]

  ds² = −f(r) dt² + dr²/f(r) + h(r)² (dθ² + sin²θ dφ²),   (3)

where the metric functions f(r) and h(r)² are defined as

  f(r) = 1 − 2m/√(r² + a²) + Q²/(r² + a²)  and  h(r)² = r² + a².

In the metric function, m stands for the mass of the BH, Q indicates the charge, and a represents the bounce parameter of the black bounce Reissner–Nordström spacetime. Several characteristics of the black bounce family have been thoroughly investigated [75, 76].
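As an illustrative check of the limits discussed below (our own helper names, not code from the paper), f(r) in Eq. (3) can be coded directly and compared against the Reissner–Nordström (a = 0) and Schwarzschild (Q = a = 0) metric functions:

```python
import math

def f(r, m, Q, a):
    """Metric function of the black bounce Reissner-Nordstrom spacetime, Eq. (3)."""
    s = r**2 + a**2
    return 1.0 - 2.0 * m / math.sqrt(s) + Q**2 / s

def f_RN(r, m, Q):   # Reissner-Nordstrom metric function
    return 1.0 - 2.0 * m / r + Q**2 / r**2

def f_schw(r, m):    # Schwarzschild metric function
    return 1.0 - 2.0 * m / r

r, m, Q = 10.0, 1.0, 0.5
print(f(r, m, Q, a=0.0) - f_RN(r, m, Q))      # -> 0.0
print(f(r, m, Q=0.0, a=0.0) - f_schw(r, m))   # -> 0.0
```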
Among these properties: curvature singularities are globally absent from the black bounce family, and it satisfies all observational weak field tests. Based on the values of the charge Q and the bounce parameter a, one can easily interpolate between the Reissner–Nordström and Schwarzschild BHs. Taking charge Q ≠ 0 and bounce parameter a = 0 yields the Reissner–Nordström BH; taking Q = 0 and a = 0 yields the Schwarzschild BH. The event horizon of the Reissner–Nordström BH corrected by the bounce parameter can be computed from f(r) = 0:

  r_h = √((m + √(m² − Q²))² − a²).

One can define a coordinate speed of light in terms of radial null curves (ds² = dθ = dφ = 0), since the radial coordinate r ∈ (−∞, +∞) [74]:

  C(r) = |dr/dt| = f(r) = 1 − 2m/√(r² + a²) + Q²/(r² + a²).   (4)
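The horizon formula can be checked numerically: the quoted r_h should be a root of f(r). A small sketch, assuming the sub-extremal case a < m + √(m² − Q²) with |Q| < m (helper names are ours):

```python
import math

def horizon_radius(m, Q, a):
    """r_h = sqrt((m + sqrt(m^2 - Q^2))^2 - a^2), valid while the
    argument under the outer square root is non-negative."""
    outer = m + math.sqrt(m**2 - Q**2)
    return math.sqrt(outer**2 - a**2)

def f(r, m, Q, a):
    """Metric function of Eq. (3)."""
    s = r**2 + a**2
    return 1.0 - 2.0 * m / math.sqrt(s) + Q**2 / s

m, Q, a = 1.0, 0.5, 0.3
rh = horizon_radius(m, Q, a)
print(f(rh, m, Q, a))   # -> 0.0 up to rounding
```

The check works because r_h² + a² = (m + √(m² − Q²))², which is exactly the outer root of 1 − 2m/u + Q²/u² = 0; for Q = a = 0 the helper reduces to the Schwarzschild value r_h = 2m.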
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' (4) 4 Thus in this spacetime, a sphere’s area at r (radial coordinate) has the following form A(r) = 4πh(r)2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' The wormhole throat is where the area is minimized, and by observing the state, one may determine where the throat is A′(ro) = 0, where the throats location is represented by r0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' The wormhole throat’s radius is thus given by h0 = h(r0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' We now divide this geometry into three categories [74]: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' The outer and inner horizon exist at rh for a < (m + � m2 − Q2) and | Q |< m.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' In this instance, ∃ rh ∈ R∗ and c(rh) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Since light has a zero coordinate speed, it cannot escape the horizon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' This geometry indicates a charged regular black hole with usual outer and inner horizons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' One can obtain one extremal horizon rh = 0, when a = (m + � m2 − Q2) and | Q |< m, and we know ∃ rh = 0 and c(rh) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' For this case, the geometry represents the extremal charged regular BH, which is the one-way charged traversable wormhole with single extremal null throat at rh = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' For the case when, a > (m + � m2 − Q2) and whether | Q |< m or | Q |> m , there is no horizons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' So, we have ∀ radial coordinate r ∈ (−∞, +∞) and c(r) ̸= 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' This case represents a two-way charged traversable wormhole and the light can travel throughout the domain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' PLASMA INFLUENCED DEFLECTION ANGLE Guo and Miao [74] have calculated the deflection angle by Reissner–Nordstr¨om BH corrected by bounce parameter in non- plasma medium utilizing GBT.' 
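The horizon condition and the three-way classification above can be sketched numerically. This is an illustrative helper (names are ours, not from the paper):

```python
import math

def horizon_radius(m, Q, a):
    # Solve f(r) = 0: r_h = sqrt((m + sqrt(m^2 - Q^2))^2 - a^2); None if no horizon exists
    if abs(Q) > m:
        return None                      # sqrt(m^2 - Q^2) is not real
    s = m + math.sqrt(m * m - Q * Q)
    if a > s:
        return None                      # case 3: no horizons
    return math.sqrt(s * s - a * a)

def classify(m, Q, a, tol=1e-12):
    # The three categories of the black bounce Reissner-Nordstrom geometry
    s = m + math.sqrt(m * m - Q * Q) if abs(Q) <= m else None
    if s is not None and a < s - tol:
        return "charged regular BH (outer and inner horizons)"
    if s is not None and abs(a - s) <= tol:
        return "extremal: one-way wormhole with null throat at r_h = 0"
    return "two-way charged traversable wormhole (no horizon)"

print(horizon_radius(1.0, 0.0, 0.0))  # Schwarzschild limit → 2.0
```

For a = 0 the familiar Reissner–Nordström outer horizon m + \sqrt{m^2 - Q^2} is recovered.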
In this section, we study how the presence of a plasma affects the bending of light by the black bounce Reissner–Nordström BH specified by the charge Q and bounce parameter a. In a plasma medium, the refractive index for the black bounce Reissner–Nordström is described as [40]

n(r) = \sqrt{1 - \delta\left(1 - \frac{2m}{\sqrt{r^2 + a^2}} + \frac{Q^2}{r^2 + a^2}\right)}, \qquad (5)

where \delta = \omega_e^2/\omega_\infty^2, and the plasma parameters ωe and ω∞ represent the electron plasma frequency and the photon frequency as measured by a static observer at infinity. For the static spherically symmetric metric in Equation (3), we assume that the light source and the observer lie in the equatorial plane (θ = π/2). Because we are working with null geodesics, we set ds^2 = 0 to obtain the corresponding optical metric

d\sigma^2 = n^2\,dt^2 = g_{pq}\,dx^p dx^q = n^2(r)\left[\frac{1}{f(r)^2}\,dr^2 + \frac{h(r)^2}{f(r)}\,d\varphi^2\right], \qquad (6)

where p, q ∈ {1, 2}. To determine the optical Gaussian curvature K̃ from the optical metric of Equation (6), we use the expression

\tilde{K} = \frac{R}{2}, \qquad (7)

where R is the Ricci scalar calculated from the optical metric.
Utilizing Equation (7), the Gaussian optical curvature of the black bounce Reissner–Nordström BH in a plasma medium is calculated as

\tilde{K} \simeq -\frac{2m}{r^3} + \frac{3Q^2}{r^4} - \frac{6mQ^2}{r^5} + \frac{5Q^2\omega_e^2}{r^4\omega_\infty^2} - \frac{3m\omega_e^2}{r^3\omega_\infty^2} - \frac{26mQ^2\omega_e^2}{r^5\omega_\infty^2} - \frac{12a^2Q^2}{r^6} - \frac{a^2}{r^4} + \frac{28a^2mQ^2}{r^7} + \frac{10a^2m}{r^5} - \frac{20a^2Q^2\omega_e^2}{r^6\omega_\infty^2} - \frac{a^2\omega_e^2}{r^4\omega_\infty^2} + \frac{115a^2mQ^2\omega_e^2}{r^7\omega_\infty^2} + \frac{31a^2m\omega_e^2}{2r^5\omega_\infty^2} + \mathcal{O}(m^2, a^4, Q^4). \qquad (8)

The value of the Gaussian optical curvature obtained in Equation (8) will be used later to compute the bending angle. Now, to acquire the deflection angle of the black bounce Reissner–Nordström BH in a plasma medium, we make use of the GBT, which is defined as follows [77]:

\iint_{G_S} \tilde{K}\,dS + \oint_{\partial G_S} k\,dt + \sum_i \epsilon_i = 2\pi X(G_S). \qquad (9)

In the above equation, k describes the geodesic curvature, defined as k = g(\nabla_{\dot\gamma}\dot\gamma, \ddot\gamma) with g(\dot\gamma, \dot\gamma) = 1, where \ddot\gamma denotes the unit acceleration vector, and ε_i represents the exterior angle at the i-th vertex. As S → ∞, the jump angles become π/2, so that we obtain θ_o + θ_S → π.
Since G_S is a non-singular region, the Euler characteristic X(G_S) is equal to 1, and the following result is obtained:

\iint_{G_S} \tilde{K}\,dS + \oint_{\partial G_S} k\,dt + \epsilon_i = 2\pi X(G_S), \qquad (10)

where ε_i = π denotes the total jump angle. As S → 0, the effective element is acquired as

k(E_S) = \left|\nabla_{\dot{E}_S}\dot{E}_S\right|. \qquad (11)

The radial component of the geodesic curvature is expressed as follows [77]:

\left(\nabla_{\dot{E}_S}\dot{E}_S\right)^r = \dot{E}_S^{\varphi}\,\partial_\varphi \dot{E}_S^{r} + \Gamma^{r}_{\varphi\varphi}\left(\dot{E}_S^{\varphi}\right)^2. \qquad (12)

For very large S, we obtain

\left(\nabla_{\dot{E}_S}\dot{E}_S\right)^r \to \frac{1}{S}. \qquad (13)

It follows that the geodesic curvature is independent of topological defects, implying that k(E_S) → 1/S. Utilizing the optical metric given in Equation (6), one can write dt = S\,d\varphi and hence k(E_S)\,dt = d\varphi. Now, using all the results obtained above together with the straight-line approximation r = b/\sin\varphi, the bending angle α̃ can be calculated from the formula

\tilde\alpha = -\int_0^{\pi}\int_{b/\sin\varphi}^{\infty} \tilde{K}\,dS, \qquad (14)

where dS = \sqrt{\det g}\,dr\,d\varphi.
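As a sanity check on Equation (14), one can numerically integrate just the leading term of the curvature, K̃ ≈ −2m/r^3 with dS ≈ r dr dφ, over the region r > b/sinφ; the result should reproduce the leading Schwarzschild term 4m/b of the deflection angle. A minimal sketch (our own illustrative code, with the inner radial integral done analytically):

```python
import math

def alpha_leading(m, b, n_phi=2000, r_max=1e9):
    # alpha = -∫_0^π ∫_{b/sinφ}^∞ K dS with K ≈ -2m/r^3 and dS ≈ r dr dφ.
    # Inner integral: ∫_{r0}^{r_max} 2m/r^2 dr = 2m (1/r0 - 1/r_max), r0 = b/sinφ.
    total = 0.0
    dphi = math.pi / n_phi
    for i in range(n_phi):
        phi = (i + 0.5) * dphi           # midpoint rule in φ
        r0 = b / math.sin(phi)
        total += 2.0 * m * (1.0 / r0 - 1.0 / r_max) * dphi
    return total

print(alpha_leading(1.0, 100.0))  # ≈ 4m/b = 0.04
```

The exact value of the φ-integral is ∫_0^π (2m sinφ/b) dφ = 4m/b, which the midpoint rule reproduces to high accuracy.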
Using Equation (14) and the value of the optical Gaussian curvature in Equation (8), the bending angle α̃ of the black bounce Reissner–Nordström BH in a plasma medium up to the leading-order terms is calculated as

\tilde\alpha \simeq \frac{4m}{b} + \frac{2m\omega_e^2}{b\omega_\infty^2} - \frac{8mQ^2}{3b^3} - \frac{3\pi Q^2}{4b^2} + \frac{2mQ^2\omega_e^2}{b^3\omega_\infty^2} - \frac{\pi Q^2\omega_e^2}{2b^2\omega_\infty^2} - \frac{8a^2m}{3b^3} + \frac{a^2\pi}{4b^2} + \frac{64a^2mQ^2}{15b^5} + \frac{27a^2\pi Q^2}{32b^4} - \frac{4a^2m\omega_e^2}{3b^3\omega_\infty^2} - \frac{16a^2mQ^2\omega_e^2}{5b^5\omega_\infty^2} + \frac{9a^2\pi Q^2\omega_e^2}{16b^4\omega_\infty^2} + \mathcal{O}(m^2, a^4, Q^4). \qquad (15)

The above deflection angle depends on the mass m, charge Q, bounce parameter a, impact parameter b, and the plasma parameters ωe and ω∞. The terms that contain the charge but not the bounce parameter a are due to the charged nature of the BH; the remaining terms are the corrections associated with the bounce parameter a. For a = 0, one recovers the bending angle α̃ of a Reissner–Nordström BH in a plasma medium.
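Equation (15) is straightforward to evaluate numerically; the sketch below (our own illustrative code, not from the paper) checks that it collapses to the Schwarzschild result 4m/b when Q = a = 0 and ωe = 0, and that a nonzero plasma ratio increases the angle.

```python
import math

def alpha_plasma(m, Q, a, b, d):
    # Eq. (15); d = (omega_e / omega_inf)^2 is the plasma ratio delta
    return (4*m/b + 2*m*d/b - 8*m*Q**2/(3*b**3) - 3*math.pi*Q**2/(4*b**2)
            + 2*m*Q**2*d/b**3 - math.pi*Q**2*d/(2*b**2)
            - 8*a**2*m/(3*b**3) + a**2*math.pi/(4*b**2)
            + 64*a**2*m*Q**2/(15*b**5) + 27*a**2*math.pi*Q**2/(32*b**4)
            - 4*a**2*m*d/(3*b**3) - 16*a**2*m*Q**2*d/(5*b**5)
            + 9*a**2*math.pi*Q**2*d/(16*b**4))

print(alpha_plasma(1.0, 0.0, 0.0, 100.0, 0.0))  # Schwarzschild limit → 0.04
```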
By neglecting the charge and the bounce parameter, the obtained angle α̃ reduces to the bending angle of the Schwarzschild BH. We also observe that the effect of the plasma is to increase the deflection angle. The bending angle is inversely proportional to the photon frequency, so for a fixed electron frequency the bending angle increases as the photon frequency is lowered. Moreover, one recovers the bending angle in a non-plasma medium [74] by taking ωe = 0 (i.e., δ → 0) in the derived deflection angle of Equation (15). We also observe that the deflection angle obtained in Equation (15) is directly proportional to the mass m, charge Q, and bounce parameter a, and inversely proportional to the impact parameter b.

Graphical behaviour: We now examine the graphical behavior of the black bounce Reissner–Nordström BH's deflection angle α̃ as a function of the impact parameter b, for fixed values of the mass m and charge Q, while varying the bounce parameter a and the plasma term.

For fixed values of the mass m and charge Q, with ωe/ω∞ = 0.1 and varying values of the bounce parameter a, Figure 1 depicts the graph of the deflection angle α̃ versus the impact parameter b. For a ≥ 0, we find that the bending angle α̃ attains its maximum at small values of the impact parameter b. As the value of b increases, the bending angle α̃ decreases exponentially and approaches zero. It is observed that for small values of b one obtains a positive angle (deflection in the upward direction). Further, the bending angle α̃ shows an inverse relationship with the impact parameter b, and physically the bending angle α̃ exhibits stable behavior.

Figure 2 depicts the behaviour of the deflection angle α̃ with respect to the impact parameter b for Q = a = 0.5 and 0 ≤ ωe/ω∞ ≤ 1. We find that the deflection angle α̃ decreases exponentially and almost approaches zero as the impact parameter b goes to infinity; in this case as well, the bending angle α̃ shows an inverse relation with b. Figure 3 exhibits the behaviour of the deflection angle α̃ with respect to the impact parameter b for Q = a = 1 and 0 ≤ ωe/ω∞ ≤ 1, and again the deflection angle decreases exponentially and almost approaches zero as b goes to infinity. In both cases (Q = a = 0.5, 1) the behavior of the angle is stable. It is observed that for Q = 1 the deflection angle α̃ relative to the impact parameter b, obtained by varying the bounce parameter a, shows behavior similar to that for Q = 0.5.

FIG. 1. Bending angle's variation α̃ as a function of impact parameter b, for m = 1, Q = 0.5m and a = 0m, 0.2m, 0.3m, 0.4m, 0.5m.

IV. DARK MATTER'S INFLUENCE ON DEFLECTION ANGLE

This section primarily concerns the calculation of the black bounce Reissner–Nordström BH's deflection angle in a DM medium. The dark-atom concept has been offered as a composite model of DM, which we study here using the phenomenon of light bending. Dark matter possesses electromagnetic interactions due to its frequency-dependent refractive index [78], and this medium has particular optical characteristics that a traveling photon may detect. The refractive index determines how fast a wave moves through a medium. In this regard, the refractive index for the black bounce Reissner–Nordström BH is defined as [78]

n(\omega) = 1 + \beta A_0 + A_2\omega^2. \qquad (16)

Here ω represents the frequency of the photon.
It is assumed that β = ρ_0/(4m^2ω^2), where ρ_0 represents the mass density of the dispersed DM particles, A_0 = -2ε^2 e^2, and A_2 ≥ 0. The optical Gaussian curvature K̃ of the black bounce Reissner–Nordström BH in the DM medium up to the leading-order terms, using Equation (7), can be calculated as

\tilde{K} \simeq \frac{1}{(1 + A_2\omega^2 + A_0\beta)^2}\left[\frac{3Q^2}{r^4} - \frac{a^2}{r^4} - \frac{12Q^2a^2}{r^6} - \frac{2m}{r^3} - \frac{6Q^2m}{r^5} + \frac{10a^2m}{r^5} + \frac{28Q^2a^2m}{r^7}\right] + \mathcal{O}(m^2, a^4, Q^4). \qquad (17)

The bending angle α̃ of the black bounce Reissner–Nordström BH in the DM medium, using Equations (17) and (14) up to the leading-order terms, can be computed as

\tilde\alpha \simeq \frac{1}{(1 + A_2\omega^2 + A_0\beta)^2}\left[\frac{4m}{b} - \frac{8a^2m}{3b^3} + \frac{a^2\pi}{4b^2} + \frac{64a^2mQ^2}{15b^5} - \frac{8mQ^2}{3b^3} + \frac{27a^2\pi Q^2}{32b^4} - \frac{3\pi Q^2}{4b^2} - \frac{16a^2mA_2\omega^2}{3b^3} + \frac{8mA_2\omega^2}{b} + \frac{a^2\pi A_2\omega^2}{2b^2} + \frac{128a^2mQ^2A_2\omega^2}{15b^5} - \frac{16mQ^2A_2\omega^2}{3b^3} + \frac{27a^2\pi Q^2A_2\omega^2}{16b^4} - \frac{3\pi Q^2A_2\omega^2}{2b^2}\right] \qquad (18)
+ \mathcal{O}(m^2, a^4, Q^4, A_2^2, \omega^4). \qquad (19)

FIG. 2. Bending angle's variation α̃ as a function of impact parameter b, for m = 1, Q = a = 0.5m and ωe/ω∞ = 0, 0.2, 0.4, 0.6, 1.

The BH's mass m, charge Q, bounce parameter a, impact parameter b, and the DM parameters are all parameters of the deflection angle obtained in Equation (19). It is observed that a photon deflected through the DM around the black bounce Reissner–Nordström BH has a larger bending angle than in the vacuum case [74]. By eliminating the DM effect, the angle of Equation (19) reduces to the bending angle in vacuum. By considering Q ≠ 0 and a = 0 in Equation (19), one obtains the expression for the Reissner–Nordström BH's deflection angle. We also find that taking charge Q = 0 and a = 0 in Equation (19) reduces the obtained angle to the Schwarzschild BH's deflection angle in a DM medium.

V. HAWKING RADIATION

In this part, we use a topological technique based on the GBT and the Euler characteristic to derive the Hawking temperature of the black bounce Reissner–Nordström BH.
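As with the plasma case, Equation (19) can be checked numerically. The sketch below (our own illustrative code; the argument names for the DM combinations are ours) verifies the stated reduction: with the DM terms switched off (A_0β = A_2ω^2 = 0) and Q = a = 0, it returns the Schwarzschild value 4m/b.

```python
import math

def alpha_dm(m, Q, a, b, A2w2, A0beta):
    # Eq. (19); A2w2 = A_2 * omega^2 and A0beta = A_0 * beta are the DM combinations
    D = (1.0 + A2w2 + A0beta) ** 2
    return (4*m/b - 8*a**2*m/(3*b**3) + a**2*math.pi/(4*b**2)
            + 64*a**2*m*Q**2/(15*b**5) - 8*m*Q**2/(3*b**3)
            + 27*a**2*math.pi*Q**2/(32*b**4) - 3*math.pi*Q**2/(4*b**2)
            - 16*a**2*m*A2w2/(3*b**3) + 8*m*A2w2/b
            + a**2*math.pi*A2w2/(2*b**2) + 128*a**2*m*Q**2*A2w2/(15*b**5)
            - 16*m*Q**2*A2w2/(3*b**3) + 27*a**2*math.pi*Q**2*A2w2/(16*b**4)
            - 3*math.pi*Q**2*A2w2/(2*b**2)) / D

print(alpha_dm(1.0, 0.0, 0.0, 100.0, 0.0, 0.0))  # vacuum Schwarzschild limit → 0.04
```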
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' To derive the Hawking temperature using the topological approach, one can utilize the Wick rotation [79] to use the Euclidean geometry of the two-dimensional spacetime without missing any facts from the four-dimensional spacetime.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' The spherically static symmetric spacetime of black bounce Reissner–Nordstr¨om BH is defined in Equation (3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Rewriting the four-dimensional metric into the two-dimensional coordinates by using the Wick rotation condition i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=', (θ = π 2 ) and (τ = it) ds2 = � 1 − 2m √ r2 + a2 + Q2 r2 + a2 � dτ 2 + 1 � 1 − 2m √ r2+a2 + Q2 r2+a2 �dr2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' (20) The formula to compute the Hawking temperature TH of black bounce Reissner–Nordstr¨om BH after using all the values of the 8 ωe ω∞ =0 ωe ω∞ =0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='2 ωe ω∞ =0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='4 ωe ω∞ =0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='6 ωe ω∞ =1 0 20 40 60 80 100 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='7 b m α˜ m=1,Q=a=1m FIG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Bending angle’s variation ˜α as a function of impact parameter b, Q = a = m.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' physical constants is defined as [15] TH = 1 4πX � rh √gRdr, (21) where, g = 1 is the determinant of Equation (20) and rh is the event horizon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Using the values of Ricci scalar R, Euler characteristic X = 1 and integrating along the event horizon, the Hawking temperature TH of black bounce Reissner–Nordstr¨om BH is calculated as TH = �� m+√ m2−Q2 �2−a2√ m2−Q2 2π � m+√ m2−Q2 �3 .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' (22) One can observe that the obtained expression of the Hawking temperature TH black bounce Reissner–Nordstr¨om BH depends on the mass m, charge Q of the BH, and bounce parameter a similarly with [71].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' We also notice that the Hawking temperature via standard technique gives the same expression as the topological technique.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' For the case Q ̸= 0 and a = 0, the obtained Hawking temperature in Equation (22) reduces to the Hawking temperature of Reissner–Nordstr¨om BH.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Moreover, the attained Hawking temperature Equation (22) reduces to the Schwarzschild–Hawking temperature, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=', TH = 1 8mπ by taking Q = a = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' To observe the behavior of Hawking temperature graphically, we plot the graph between Hawking temperature TH and bounce parameter a in Figure 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' We observe that for Q = 0.' 
For $Q = 0.5$, the Hawking temperature decreases exponentially.

VI. GREYBODY FACTOR

This section mainly examines the greybody factor bound of the black bounce Reissner–Nordström BH. Much research has been dedicated to estimating greybody factors; there is, however, a distinct analytic approach for obtaining bounds on the greybody factors. The line element for the Reissner–Nordström BH corrected by the bounce parameter in a static spherically symmetric spacetime is defined in Equation (3). The lower bound on the transmission probability $T$ can be defined as [22, 80, 81]
$$T \geq \mathrm{sech}^{2}\left(\frac{1}{2\omega}\int_{-\infty}^{\infty}\varrho\,dr_{*}\right), \qquad (23)$$
FIG. 4. Hawking temperature $T_{H}$ vs the bounce parameter $a$, for $m = 1$ and $Q = 0.5$.

where
$$\varrho=\frac{\sqrt{\left[g'(r_{*})\right]^{2}+\left[\omega^{2}-V(r_{*})-g^{2}(r_{*})\right]^{2}}}{2g(r_{*})},$$
with $r_{*}$ the tortoise coordinate and $g$ a positive function.
For the radial part, the equation of motion is given as
$$\frac{1}{h(r)^{2}}\frac{d}{dr}\left[h(r)^{2}f(r)\frac{du(r)}{dr}\right]+\left[\frac{\omega^{2}}{f(r)}-\frac{l(l+1)}{h(r)^{2}}\right]u(r)=0, \qquad (24)$$
where $u(r)$ denotes the oscillating scalar or vector field. Taking $dr_{*}=\frac{1}{f(r)}dr$, the potential is defined as [82]
$$V(r)=\frac{l(l+1)f(r)}{h(r)^{2}}. \qquad (25)$$
The lower bound on the transmission probability $T$ for $g=\omega$ is given by
$$T \geq \mathrm{sech}^{2}\left(\frac{1}{2\omega}\int_{r_{h}}^{\infty}V(r)\,dr_{*}\right). \qquad (26)$$
After substituting the values of $V$ and $dr_{*}$, we obtain
$$T \geq \mathrm{sech}^{2}\left(\frac{1}{2\omega}\int_{r_{h}}^{\infty}\frac{l(l+1)}{h(r)^{2}}\,dr\right). \qquad (27)$$
The greybody bound $T_{b}$ of the black bounce Reissner–Nordström BH, after inserting the value of $h(r)^{2}$ and integrating from $r_{h}$, is calculated as
$$T_{b}=T \geq \mathrm{sech}^{2}\left[\frac{1}{2\omega}\left(\frac{l(l+1)\pi}{2a}-\frac{l(l+1)}{a}\arctan\left(\frac{\sqrt{\left(m+\sqrt{m^{2}-Q^{2}}\right)^{2}-a^{2}}}{a}\right)\right)\right]. \qquad (28)$$
The bound $T_{b}$ of the black bounce Reissner–Nordström BH depends upon the mass $m$, charge $Q$, and bounce parameter $a$ of the BH. Guo and Miao [74] have also calculated the greybody factor of perturbation fields of the black bounce Reissner–Nordström BH.
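Since $h(r)^{2}=r^{2}+a^{2}$, the integral in Equation (27) evaluates in closed form to the arctangent expression in Equation (28). The behavior of the bound can be probed numerically; a short sketch (function name ours, geometrized units assumed):

```python
import math

def greybody_bound(m, Q, a, l, omega):
    """Lower bound T_b on the transmission probability, Eq. (28)."""
    # event horizon radius: h(r_h)^2 = r_h^2 + a^2 = (m + sqrt(m^2 - Q^2))^2
    rh = math.sqrt((m + math.sqrt(m**2 - Q**2))**2 - a**2)
    # closed form of integral of l(l+1)/(r^2 + a^2) from r_h to infinity
    integral = (l * (l + 1) / a) * (math.pi / 2.0 - math.atan(rh / a))
    return 1.0 / math.cosh(integral / (2.0 * omega))**2   # sech^2(x) = 1/cosh(x)^2

m, Q, l = 1.0, 0.5, 1

# the bound approaches 1 at high frequency (waves pass almost freely)
assert greybody_bound(m, Q, 0.5, l, 100.0) > 0.999

# a larger bounce parameter lowers the bound, as discussed for Figs. 5-6
assert greybody_bound(m, Q, 1.0, l, 1.0) < greybody_bound(m, Q, 0.5, l, 1.0)
```

The second assertion reflects the statement below that increasing $a$ makes it harder for waves to cross the (higher) potential, while the first mirrors the convergence of $T_{b}$ to 1.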
It is observed from the graphs that when the potential of the black bounce Reissner–Nordström BH is higher, the bound is lower.

A. Graphical Analysis

The purpose of this section is to explain the graphical behavior of the greybody bound $T_{b}$ and the potential of the black bounce Reissner–Nordström BH. For this purpose, we take fixed values of the charge $Q$, angular momentum $l = 1, 2$, and a varying bounce parameter $a$. Figure 5 depicts the graphical behavior of the potential $V$ relative to $r$, and of the greybody factor bound $T_{b}$ relative to $\omega$. For $0 < a < 2$, the potential $V$ increases and attains its maximum value. However, as the value of the bounce parameter $a \to 0$, the potential decreases exponentially and approaches zero.
It is also observed that as $r \to 0$, the potential is high and attains its maximum value, while for large values of $r$ the potential decreases from its maximum and almost approaches zero. Nevertheless, as the value of $a$ increases, the corresponding bound becomes lower, making it more difficult for the waves to pass through the higher potential. However, the bound $T_{b}$ shows convergent behavior, converging to 1.

FIG. 5. The left panel shows the potential with $l = 1$ ($m = 1$, $Q = 0.5$, $a = 0.50$–$1.75$); the corresponding bound $T_{b}$ is shown in the right panel.

Figure 6 represents the graphical behavior of the potential $V$ with respect to $r$ and of the greybody factor bound $T$ with respect to $\omega$. For $0 < a < 2$, the bound $T$ shows a similar behavior as for $l = 1$.
VII. SHADOW BEHAVIOR

We now turn our attention to exploring the shadow behavior of the black bounce RN black hole. Let us consider the Hamiltonian for light rays in the presence of a non-magnetized cold plasma with plasma frequency $\omega_{p}(r)$ [83]:
$$H=\frac{1}{2}\left[g^{ik}p_{i}p_{k}+\omega_{p}(r)^{2}\right]=\frac{1}{2}\left[-\frac{p_{t}^{2}}{A(r)}+\frac{p_{r}^{2}}{B(r)}+\frac{p_{\varphi}^{2}}{C(r)}+\omega_{p}(r)^{2}\right]. \qquad (29)$$
In the equation above, note that $C(r)=h(r)^{2}$ due to Equation (3); furthermore, $A(r)=f(r)$ and $B(r)=A(r)^{-1}$. Without compromising generality, we can restrict ourselves to the equatorial plane ($\theta=\pi/2$) due to spherical symmetry and derive the equations of motion (EoM) through
$$\dot{x}^{i}=\frac{\partial H}{\partial p_{i}},\qquad \dot{p}_{i}=-\frac{\partial H}{\partial x^{i}}, \qquad (30)$$
which reveals two constants of motion:
$$E=A(r)\frac{dt}{d\lambda},\qquad L=h(r)^{2}\frac{d\varphi}{d\lambda}. \qquad (31)$$
FIG. 6. The left panel shows the potential with $l = 2$ ($m = 1$, $Q = 0.5$, $a = 0.50$–$1.75$); the corresponding bound $T_{b}$ is shown in the right panel.
With the above equations, we can define the impact parameter as
$$b \equiv \frac{L}{E}=\frac{h(r)^{2}}{A(r)}\frac{d\varphi}{dt}, \qquad (32)$$
and the condition $ds^{2}=0$ gives the rate of change of the $r$-coordinate with respect to the azimuthal angle $\varphi$:
$$\left(\frac{dr}{d\varphi}\right)^{2}=\frac{h(r)^{2}}{B(r)}\left[\frac{p(r)^{2}}{b^{2}}-1\right], \qquad (33)$$
where [83]
$$p(r)^{2}=\frac{h(r)^{2}}{A(r)}n(r)^{2}=\frac{h(r)^{2}}{A(r)}\left[1-\frac{\omega_{e}^{2}}{\omega_{\infty}^{2}}A(r)\right], \qquad (34)$$
since the non-gravitating plasma is assumed to be non-homogeneous. With our metric functions, the condition $p'(r)=0$ allows one to find the photon-sphere radius [83]; for our case,
$$\left[\frac{\omega_{e}^{2}}{\omega_{0}^{2}}A(r)^{2}-A(r)\right]2h(r)h'(r)+h(r)^{2}A'(r)=0. \qquad (35)$$
With the inclusion of the plasma parameter, finding the analytical expression for $r_{ph}$ can be quite lengthy. However, for the case where there is no plasma (i.e., $n(r)=1$), we find the physical solution
$$r_{ph}=\frac{\sqrt{18M^{2}-8Q^{2}-4a^{2}+6\sqrt{9M^{4}-8M^{2}Q^{2}}}}{2}. \qquad (36)$$
A static observer at a distance $r_{obs}$ from the black bounce black hole can obtain the angular radius $\alpha_{sh}$ of the shadow.
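Equation (36) can be sanity-checked numerically: $r_{ph}$ should be a stationary point of $p(r)^{2}=h(r)^{2}/A(r)$ when $n(r)=1$, and should reduce to the Schwarzschild photon sphere $r_{ph}=3M$ for $Q=a=0$. A brief sketch (function names ours, geometrized units assumed):

```python
import math

def photon_sphere_radius(M, Q, a):
    """Photon-sphere radius r_ph for n(r) = 1, Eq. (36)."""
    return math.sqrt(18*M**2 - 8*Q**2 - 4*a**2
                     + 6*math.sqrt(9*M**4 - 8*M**2*Q**2)) / 2.0

def p_squared(r, M, Q, a):
    """p(r)^2 = h(r)^2 / A(r) with h^2 = r^2 + a^2 and A = f (no plasma)."""
    h2 = r**2 + a**2
    f = 1.0 - 2.0*M/math.sqrt(h2) + Q**2/h2
    return h2 / f

M, Q, a = 1.0, 0.5, 0.5
rph = photon_sphere_radius(M, Q, a)

# Q = a = 0 recovers the Schwarzschild photon sphere r_ph = 3M
assert abs(photon_sphere_radius(M, 0.0, 0.0) - 3.0*M) < 1e-12

# r_ph is a stationary point of p(r)^2: the central difference is ~ 0
eps = 1e-6
dp = (p_squared(rph + eps, M, Q, a) - p_squared(rph - eps, M, Q, a)) / (2*eps)
assert abs(dp) < 1e-5
```

The second check works because, for $h^{2}=r^{2}+a^{2}$, the stationarity condition reduces to $h^{2}-3Mh+2Q^{2}=0$, whose positive root reproduces Equation (36).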
From the black hole's center to $r_{obs}$, simple geometry shows that $\Delta x=\sqrt{B(r)}\,dr$ and $\Delta y=h(r)\,d\varphi$ [83]:
$$\tan(\alpha_{sh})=\lim_{\Delta x\to 0}\frac{\Delta y}{\Delta x}=h(r)\left(\frac{1}{B(r)}\right)^{1/2}\frac{d\varphi}{dr}\bigg|_{r=r_{obs}}, \qquad (37)$$
which can be simplified in terms of the critical impact parameter as
$$\sin^{2}(\alpha_{sh})=\frac{b_{crit}^{2}}{p(r_{obs})^{2}}. \qquad (38)$$
Here, $b_{crit}$ can be obtained using the orbit equation [84]:
$$b_{crit}^{2}=\frac{p(r_{ph})\left[2h(r_{ph})^{2}B(r_{ph})p'(r_{ph})-h(r_{ph})^{2}B'(r_{ph})p(r_{ph})+B(r_{ph})h'(r_{ph})^{2}p(r_{ph})\right]}{B(r_{ph})h'(r_{ph})^{2}-h(r_{ph})^{2}B'(r_{ph})}. \qquad (39)$$
The critical impact parameter's analytical expression with $n(r)$ is somewhat complicated, but for the case $n(r)=1$, we find
$$b_{crit}^{2}=\frac{2h(r_{ph})^{3}}{h(r_{ph})-m}. \qquad (40)$$
Finally, it can easily be shown that, in terms of $r_{obs}$ and $r_{ph}$, the shadow radius is ($n(r)=1$)
$$R_{sh}=\left[\frac{2h(r_{ph})^{3}\left(h(r_{obs})^{2}-2mh(r_{obs})+Q^{2}\right)}{h(r_{obs})^{2}\left(h(r_{ph})-m\right)}\right]^{1/2}. \qquad (41)$$
Next, we plot Equation (41), indicated by the dotted lines in Figure 7. We also include in the plot the case where the black bounce black hole is surrounded by plasma (solid lines). For immediate comparison, we also plot the Schwarzschild and RN cases.
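Equation (41) can be evaluated directly to reproduce the limits discussed for Figure 7. In the sketch below (function name ours) we use $h(r_{ph})=\tfrac{1}{2}\big(3m+\sqrt{9m^{2}-8Q^{2}}\big)$, which follows from Equation (36) via $h(r_{ph})^{2}=r_{ph}^{2}+a^{2}$:

```python
import math

def shadow_radius(robs, m, Q, a):
    """Shadow radius R_sh seen by a static observer at r_obs, Eq. (41), n(r) = 1."""
    h_ph = (3.0*m + math.sqrt(9.0*m**2 - 8.0*Q**2)) / 2.0   # h(r_ph), from Eq. (36)
    h_obs = math.sqrt(robs**2 + a**2)                        # h(r_obs)
    num = 2.0 * h_ph**3 * (h_obs**2 - 2.0*m*h_obs + Q**2)
    den = h_obs**2 * (h_ph - m)
    return math.sqrt(num / den)

m = 1.0
# Q = a = 0: R_sh approaches the Schwarzschild value 3*sqrt(3)*m for a far observer
assert abs(shadow_radius(1.0e6, m, 0.0, 0.0) - 3.0*math.sqrt(3.0)*m) < 1e-5

# at r_obs = 25 m the radius is still below the asymptotic value, as noted for Fig. 7
assert shadow_radius(25.0, m, 0.0, 0.0) < 3.0*math.sqrt(3.0)*m
```

Note that $h(r_{ph})$ is independent of $a$; the bounce parameter enters only through $h(r_{obs})$, which is why it mainly changes how quickly $R_{sh}$ settles to its asymptotic value.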
Furthermore, while it is understood that the shadow cast by a non-spinning black hole is a circle, we can see in the plot the behavior of the shadow radius.

FIG. 7. Behavior of the shadow radius for a static observer at varying location from the black bounce RN BH, with $Q=0.25m$; curves are shown for $a=0.25$–$1.00$ with $\delta=10^{-1}$ (solid) and $\delta=0$ (dotted), together with the Schwarzschild and RN ($Q=0.25m$) cases.

First, without the bounce parameter, the RN case with $Q=0.25m$ decreases the shadow radius while following the general trend of the Schwarzschild case. For the Schwarzschild case, $R_{sh}=3\sqrt{3}m$ as $r_{obs}$ becomes larger, but at $r_{obs}=25m$ we observe that it is still lower than this value of $R_{sh}$.
The effect of the bounce parameter is to produce lower values of $R_{sh}$ and to make its rate of change with respect to $r$ approach zero at lower values of $r_{obs}$. This means that the observer does not need to go very far away to observe a constant shadow radius. Furthermore, the bounce parameter allows the formation of the shadow near the event horizon. With the plasma medium $\delta=10^{-1}$, we observe that the shadow follows the general trend of $\delta=0$ but increases slightly; such an increase depends on the value of the plasma parameter. For completeness, let us analyze the effect of the dark matter refractive index $n(\omega)$ in Equation (16) instead of the plasma. The photon sphere can be found via
$$n(\omega)^{2}\left[2h'(r)A(r)-A'(r)h(r)\right]=0, \qquad (42)$$
which reveals that the photon-sphere radius is independent of the dark matter parameter.
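The factorization in Equation (42) implies two checkable facts: the location of the photon sphere does not change when $n(\omega)\neq 1$, while $b_{crit}^{2}=p(r_{ph})^{2}$ simply scales by $n(\omega)^{2}$. A numerical sketch (a constant refractive index is assumed; function names are ours):

```python
import math

def p_squared(r, m, Q, a, n=1.0):
    """p(r)^2 = n(omega)^2 h(r)^2 / A(r) for a constant refractive index n."""
    h2 = r**2 + a**2
    A = 1.0 - 2.0*m/math.sqrt(h2) + Q**2/h2
    return n**2 * h2 / A

def stationary_point(f, lo, hi, tol=1e-12):
    """Golden-section search for the minimizer of f on [lo, hi] (p^2 dips at r_ph)."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    while hi - lo > tol:
        c, d = hi - g*(hi - lo), lo + g*(hi - lo)
        if f(c) < f(d):
            hi = d
        else:
            lo = c
    return (lo + hi) / 2.0

m, Q, a = 1.0, 0.5, 0.5
r1 = stationary_point(lambda r: p_squared(r, m, Q, a, 1.0), 1.9, 5.0)
r2 = stationary_point(lambda r: p_squared(r, m, Q, a, 1.3), 1.9, 5.0)

assert abs(r1 - r2) < 1e-6   # photon sphere is independent of n(omega)
# b_crit^2 scales by n^2, so R_sh scales by n, cf. Eqs. (43)-(44)
assert abs(p_squared(r1, m, Q, a, 1.3) / p_squared(r1, m, Q, a, 1.0) - 1.3**2) < 1e-9
```

The bracket $[1.9, 5.0]$ lies outside the horizon ($r_{h}\approx 1.80$ for these parameters) and contains the photon sphere at $r_{ph}\approx 2.78$ from Equation (36).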
It is easy to verify through the orbit equation that the critical impact parameter in this case is
$$b_{crit}^{2}=\frac{2n(\omega)^{2}h(r_{ph})^{3}}{h(r_{ph})-m}. \qquad (43)$$
Then, the shadow radius is given by
$$R_{sh}=n(\omega)\,h(r_{ph})\left[\frac{2h(r_{ph})\left(h(r_{obs})^{2}-2mh(r_{obs})+Q^{2}\right)}{h(r_{obs})^{2}\left(h(r_{ph})-m\right)}\right]^{1/2}, \qquad (44)$$
where the shadow radius is increased by a factor of $n(\omega)$.

VIII. CONCLUSION

In this work, we have discussed the Reissner–Nordström BH corrected by the bounce parameter and its properties; in particular, curvature singularities are absent from the black bounce family on a global scale, and it satisfies all observable weak-field tests. In the case of plasma and DM mediums, the attained bending angle, Equation (15), depends on the mass $m$ and charge $Q$ of the BH, the bounce parameter $a$, the impact parameter, and the medium's parameters.
It is noted that, in the bending angle of Equation (15), the terms without the bounce parameter $a$ that contain the charge are due to the charged nature of the BH, while the remaining terms are due to the corrections from the bounce parameter $a$. It is also found that the effect of the plasma increases the deflection angle. The bending angle is inversely proportional to the photon frequency, so the bending angle increases as the photon frequency is lowered, assuming the electron frequency is fixed. It is also observed that the black bounce Reissner–Nordström BH's bending angle increases due to the DM medium compared to the vacuum case. In both of the above mediums, we have shown that in the absence of the bounce parameter one obtains the bending angle of the Reissner–Nordström BH, and the bending angle of the Schwarzschild BH by neglecting both the charge and the bounce parameter of the BH. Moreover, one can attain the bending angle of a non-plasma medium by taking $\omega_{e}=0$ in the plasma medium.
It is also noticed that, by ignoring the DM effect in the deflection angle of Equation (15), the angle reduces to that of a non-plasma medium. We also observed that the bending angle obtained in both mediums is directly proportional to the mass $m$, charge $Q$, and bounce parameter $a$, and inversely proportional to the impact parameter $b$. Following that, we analyzed the graphical behavior of the bending angle $\tilde{\alpha}$ with respect to $b$ for fixed values of the mass $m$, charge $Q$, and $\frac{\omega_{e}}{\omega_{\infty}}=0.1$, while varying the bounce parameter $a$. We found that for small values of the impact parameter $b$ the bending angle $\tilde{\alpha}$ is maximal, and as $b$ increases, $\tilde{\alpha}$ decreases exponentially and approaches zero. Moreover, we investigated the deflection angle $\tilde{\alpha}$ with respect to the impact parameter $b$ by fixing $m$ and $Q$, taking $a=0.5, 1$, and varying the plasma factor.
For Q = a = 0.5, 1, we found that the deflection angle α̃ decreases exponentially and approaches zero as the impact parameter b goes to infinity. In all of the above cases, the graphs show that the deflection angle α̃ is inversely related to the impact parameter b and that its behavior is physically stable. Furthermore, we computed the Hawking temperature using a topological method involving two invariants, namely the two-dimensional Euler characteristic and the GBT. The resulting Hawking temperature TH of the black bounce Reissner–Nordström BH, Equation (22), depends on the mass m and charge Q of the BH and on the bounce parameter a, and the expression obtained via this topological technique agrees with the standard result.
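In the Q = a = 0 limit discussed next, the temperature reduces to the well-known Schwarzschild value TH = 1/(8πm) in geometric units (G = c = ħ = kB = 1); a quick numerical check of this limit and its 1/m falloff:

```python
import math

def schwarzschild_hawking_temperature(m):
    """Schwarzschild limit T_H = 1/(8*pi*m) of the Hawking temperature
    (geometric units, G = c = hbar = k_B = 1)."""
    return 1.0 / (8.0 * math.pi * m)

# The temperature falls off as 1/m: doubling the mass halves T_H.
t1 = schwarzschild_hawking_temperature(1.0)
t2 = schwarzschild_hawking_temperature(2.0)
assert math.isclose(t2, t1 / 2.0)
```

The full Equation (22), with its Q- and a-dependence, is not reproduced in this excerpt; only this standard limiting value is checked here.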
It is to be mentioned here that for Q ≠ 0 and a = 0, the Hawking temperature of Equation (22) reduces to that of the Reissner–Nordström BH, and for Q = a = 0, it reduces to the Schwarzschild Hawking temperature, i.e., TH = 1/(8πm). Furthermore, we investigated graphically that the Hawking temperature decreases exponentially. We also calculated the greybody bound Tb and found that, for the black bounce Reissner–Nordström BH, it depends on the mass m and charge Q of the BH and on the bounce parameter a. Moreover, we observed that the potential V increases and attains its maximum value for l = 1, 2. As the value of a increases, the potential decreases exponentially and approaches zero. It is to be mentioned here that the potential becomes large as r → 0 and approaches zero at large r.
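The paper's specific expression for the bound Tb is not reproduced in this excerpt, but greybody analyses of this kind typically use the Visser-type rigorous lower bound Tb ≥ sech²((1/2ω)∫V dr*). That form directly exhibits the behavior described here: a taller barrier (larger potential integral, e.g. from a larger a) lowers the bound, and the bound converges to 1 as the barrier contribution vanishes.

```python
import math

def greybody_lower_bound(integral_V, omega):
    """Visser-type lower bound on the transmission probability,
    T_b >= sech^2( (1/(2*omega)) * Integral[V] dr_* ).

    `integral_V` stands for the integral of the potential over the
    tortoise coordinate (assumed form; the paper's own expression for
    T_b is not shown in this excerpt).
    """
    return 1.0 / math.cosh(integral_V / (2.0 * omega)) ** 2

# No barrier -> perfect transmission bound of 1.
assert math.isclose(greybody_lower_bound(0.0, 1.0), 1.0)
# A larger potential integral (e.g. a taller barrier) lowers the bound.
assert greybody_lower_bound(2.0, 1.0) < greybody_lower_bound(1.0, 1.0)
```

Since sech² ≤ 1 always, the bound converging to 1 corresponds to the waves seeing an effectively vanishing barrier.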
It is observed that the corresponding bound becomes lower as the value of a increases. Furthermore, the greybody factor's bound exhibits convergent behavior and converges to 1. We also observed that for large values of a and small r, the potential is higher, making it more difficult for the waves to pass through. Finally, we explored the effect of the bounce parameter on the shadow radius, both in vacuum and when the BH is surrounded by plasma. First, the bounce parameter allows shadow formation closer to the black hole and at a larger radius than in the Schwarzschild or RN cases; the rate at which the shadow grows is also larger. Moreover, we observe that the bounce parameter quickly drives the rate of change of the shadow radius to zero, even at low values of robs.
Finally, the effect of the plasma is simply to increase the shadow radius of the black hole affected by the bounce parameter. These parameters can indeed change the shadow radius, which sophisticated astronomical instruments can detect.

ACKNOWLEDGMENTS

A. Ö. and R. P. would like to acknowledge networking support by the COST Action CA18108 - Quantum gravity phenomenology in the multi-messenger approach (QG-MM).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [39] Toshiaki Ono, Asahi Ishihara, and Hideki Asada, “Gravitomagnetic bending angle of light with finite-distance corrections in stationary axisymmetric spacetimes,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' D 96, 104037 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [40] Gabriel Crisnejo and Emanuel Gallo, “Weak lensing in a plasma medium and gravitational deflection of massive particles using the gauss-bonnet theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' a unified treatment,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' D 97, 124016 (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [41] Zonghai Li and Ali ¨Ovg¨un, “Finite-distance gravitational deflection of massive particles by a kerr-like black hole in the bumblebee gravity model,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' D 101, 024040 (2020).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [42] Zonghai Li, Guodong Zhang, and Ali ¨Ovg¨un, “Circular orbit of a particle and weak gravitational lensing,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' D 101, 124058 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [43] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Oort, “The force exerted by the stellar system in the direction perpendicular to the galactic plane and some related problems,” Astron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Inst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Netherlands 6, 249 (1932).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [44] Fritz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Zwicky, “On the masses of nebulae and of clusters of nebulae.” The Astrophys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 86, 217 (1937).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [45] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Feng, “Dark Matter Candidates from Particle Physics and Methods of Detection.” The Astrophys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Supple.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Ser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 48, 495–545 (2010).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [46] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Jarosik et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=', Astrophys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=', Suppl.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Ser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 192, 14 (2011).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [47] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' ¨Ovg¨un, “Deflection angle of photons through dark matter by black holes and wormholes using gaussbonnet theorem,” Universe 5, 115 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [48] Reggie C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Pantig and Ali ¨Ovg¨un, “Dark matter effect on the weak deflection angle by black holes at the center of Milky Way and M87 galaxies,” Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' C 82, 391 (2022), arXiv:2201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='03365 [gr-qc].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [49] Reggie C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Pantig and Emmanuel T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Rodulfo, “Weak deflection angle of a dirty black hole,” Chin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 66, 691–702 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [50] Reggie C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Pantig and Ali ¨Ovg¨un, “Black hole in quantum wave dark matter,” Fortsch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 2022, 2200164 (2022), arXiv:2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='00523 [gr-qc].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [51] Reggie C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Pantig and Ali ¨Ovg¨un, “Dehnen halo effect on a black hole in an ultra-faint dwarf galaxy,” JCAP 08, 056 (2022), arXiv:2202.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='07404 [astro-ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='GA].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [52] Kazunori Akiyama et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' (Event Horizon Telescope), “First M87 Event Horizon Telescope Results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' The Shadow of the Supermassive Black Hole,” Astrophys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 875, L1,17 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [53] Kazunori Akiyama et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' (Event Horizon Telescope), “First Sagittarius A* Event Horizon Telescope Results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' The Shadow of the Supermassive Black Hole in the Center of the Milky Way,” Astrophys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 930, L12 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [54] James M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Bardeen, William H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Press, and Saul A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Teukolsky, “Rotating Black Holes: Locally Nonrotating Frames, Energy Extraction, and Scalar Synchrotron Radiation,” Astrophys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 178, 347–370 (1972).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [55] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Synge, “The Escape of Photons from Gravitationally Intense Stars,” Mon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Not.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Roy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Astron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 131, 463–466 (1966).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [56] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Luminet, “Image of a spherical black hole with thin accretion disk,” Astron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Astrophys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 75, 228–235 (1979).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [57] Ramesh Narayan, Michael D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Johnson, and Charles F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Gammie, “The shadow of a spherically accreting black hole,” The Astrophysical Journal 885, L33 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [58] Yang Guo and Yan-Gang Miao, “Charged black-bounce spacetimes: Photon rings, shadows and observational appearances,” Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' B 983, 115938 (2022), arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='01747 [gr-qc].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [59] Reggie C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Pantig, Paul K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Yu, Emmanuel T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Rodulfo, and Ali ¨Ovg¨un, “Shadow and weak deflection angle of extended uncertainty principle black hole surrounded with dark matter,” Annals Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 436, 168722 (2022), arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='04304 [gr-qc].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [60] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Konoplya and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Zhidenko, “Solutions of the Einstein Equations for a Black Hole Surrounded by a Galactic Halo,” Astrophys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 933, 166 (2022), arXiv:2202.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='02205 [gr-qc].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [61] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Konoplya, “Shadow of a black hole surrounded by dark matter,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' B 795, 1–6 (2019), arXiv:1905.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='00064 [gr-qc].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [62] Zhaoyi Xu, Xian Hou, Xiaobo Gong, and Jiancheng Wang, “Black Hole Space-time In Dark Matter Halo,” JCAP 09, 038 (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [63] Zhaoyi Xu, Xiaobo Gong, and Shuang-Nan Zhang, “Black hole immersed dark matter halo,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' D 101, 024029 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [64] Reggie C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Pantig and Emmanuel T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Rodulfo, “Rotating dirty black hole and its shadow,” Chin.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 68, 236–257 (2020), arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='06829 [gr-qc].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [65] Wajiha Javed, Hafsa Irshad, Reggie C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Pantig, and Ali vgn, “Weak deflection angle by kalb-ramond traversable wormhole in plasma and dark matter mediums,” Universe 8 (2022), 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='3390/universe8110599.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [66] Wajiha Javed, Sibgha Riaz, Reggie C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Pantig, and Ali ¨Ovg¨un, “Weak gravitational lensing in dark matter and plasma mediums for wormhole-like static aether solution,” Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' C 82, 1057 (2022), arXiv:2212.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='00804 [gr-qc].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [67] Kimet Jusufi, Mubasher Jamil, and Tao Zhu, “Shadows of Sgr A∗ black hole surrounded by superfluid dark matter halo,” Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' C 80, 354 (2020), arXiv:2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='05299 [gr-qc].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' [68] Sourabh Nampalliwar, Saurabh Kumar, Kimet Jusufi, Qiang Wu, Mubasher Jamil, and Paolo Salucci, “Modeling the Sgr A* Black Hole Immersed in a Dark Matter Spike,” Astrophys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content=' 916, 116 (2021), arXiv:2103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9AzT4oBgHgl3EQf5P4V/content/2301.01855v1.pdf'} +page_content='12439 [astro-ph.' 
diff --git a/j9FPT4oBgHgl3EQf1jVQ/content/tmp_files/2301.13183v1.pdf.txt b/j9FPT4oBgHgl3EQf1jVQ/content/tmp_files/2301.13183v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..1c4e800ee524589c5de56e379c22237ba773d0d0
--- /dev/null
+++ b/j9FPT4oBgHgl3EQf1jVQ/content/tmp_files/2301.13183v1.pdf.txt
@@ -0,0 +1,1157 @@
Learning Control from Raw Position Measurements
Fabio Amadio1, Alberto Dalla Libera1, Daniel Nikovski2, Ruggero Carli1, Diego Romeres2

Abstract— We propose a Model-Based Reinforcement Learning (MBRL) algorithm named VF-MC-PILCO, specifically designed for application to mechanical systems where velocities cannot be directly measured. This circumstance, if not adequately considered, can compromise the success of MBRL approaches. To cope with this problem, we define a velocity-free state formulation which consists of the collection of past positions and inputs. Then, VF-MC-PILCO uses Gaussian Process Regression to model the dynamics of the velocity-free state and optimizes the control policy through a particle-based policy gradient approach. We compare VF-MC-PILCO with our previous MBRL algorithm, MC-PILCO4PMS, which handles the lack of direct velocity measurements by modeling the presence of velocity estimators. Results on both simulated (cart-pole and UR5 robot) and real mechanical systems (Furuta pendulum and a ball-and-plate rig) show that the two algorithms achieve similar results.
Conveniently, VF-MC-PILCO does not require the design and implementation of state estimators, which can be a challenging and time-consuming activity to be performed by an expert user.

I. INTRODUCTION

Model-Based Reinforcement Learning (MBRL) [1] proved to be a promising strategy to overcome the challenges of delivering Reinforcement Learning (RL) [2] solutions to real-world problems. In fact, standard Model-Free RL algorithms usually require a massive amount of interaction with the systems to solve the assigned task. This requirement might be unfeasible in many real-world applications, e.g. control of mechanical systems and robotics, due to the limited time available and the risk of damaging the devices involved in such a long training phase. On the other hand, MBRL uses the collected data to train a predictive model of the environment and updates the policy based on model simulations. With this strategy, we are able to extract more valuable information from the available data and increase data efficiency [3]. Nevertheless, the effectiveness of MBRL methods strongly depends on how accurately the trained model can simulate the behavior of the environment. For this reason, it is necessary to adopt stochastic models in order to capture uncertainty about predictions. Different classes of models have been employed, from Gaussian Processes (GPs) [4] in [5], [6], [7], [8], to ensembles of probabilistic deep neural networks in [9], [10].

The application of this kind of method to real-world environments is affected by another major problem: the full state of a real system is often only partially measurable.

1 Fabio Amadio, Alberto Dalla Libera, and Ruggero Carli are with the Department of Information Engineering, University of Padova, Via Gradenigo 6/B, 35131 Padova, Italy [fabio.amadio@phd.unipd.it, dallaliber@dei.unipd.it, carlirug@dei.unipd.it].
2 Diego Romeres and Daniel Nikovski are with Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA 02139 [romeres@merl.com, nikovski@merl.com].

For instance, when dealing with mechanical systems, joint positions can be measured by means of proper sensors, e.g. encoders, while velocities can only be estimated from the history of sampled positions. In our previous work [11], we proposed an MBRL algorithm, called MC-PILCO4PMS, specifically tailored to deal with Partially Measurable Systems and to correctly take into account the presence of online and offline state observers. It proved able to robustly learn from scratch how to control mechanical systems, in both simulated and real environments, even when the velocity is not directly measurable. However, the tuning of accurate filters and state estimators can be particularly challenging and time-consuming for systems affected by significant noise.

In this work, we present an alternative approach, called VF-MC-PILCO, that completely circumvents the necessity of estimating velocities by working only with the history of measured positions and applied control actions. We adopted a Velocity-Free (VF) predictive model, similar to the one proposed in [12], [13], together with a control policy whose input depends only on positions. Compared to the works in [12], [13], which focused on the modeling part, in this work we propose a complete VF solution to the RL problem. The proposed method is first tested on two simulated systems with an increasing number of DoF, i.e., a cart-pole and a 6-DoF UR5 robot. Then, VF-MC-PILCO is tested on two real systems with an increasing level of difficulty for the velocity estimation, i.e., a Furuta pendulum equipped with encoders and a ball-and-plate system equipped with an external camera to infer the ball positions. VF-MC-PILCO correctly solved all the tasks, with performance similar to that obtained by MC-PILCO4PMS.
To solve such tasks, MC-PILCO4PMS must accurately reproduce the online filter employed inside the policy optimization algorithm and implement an offline filtering procedure for estimating the velocities needed for modeling. On the contrary, VF-MC-PILCO, by working directly with raw measurements, provides an alternative that requires less effort and expertise from the user, obtaining similar performance despite the presence of significant noise. The comparisons are carried out against MC-PILCO4PMS because this algorithm was shown to outperform other s.o.t.a. MBRL algorithms in [11].

The remainder of this paper is structured as follows. In Sec. II, we formulate the problem we aim to solve and describe the use of GPs for modeling. In Sec. III, we detail the proposed algorithm, VF-MC-PILCO. Section IV illustrates the validation conducted on the simulated cart-pole benchmark. Section V reports the experiment performed on a simulated UR5 robot to test the capacity of VF-MC-PILCO to handle systems with up to 6 DoF. Section VI shows the results of the experiments with the two real mechanical systems. Finally, we draw conclusions in Section VII.

arXiv:2301.13183v1 [cs.RO] 30 Jan 2023

II. BACKGROUND

In this section, we first introduce the problem of MBRL on real mechanical systems. Then, we briefly discuss how Gaussian Process Regression (GPR) is usually used for modeling purposes.

A. Problem Formulation

Consider a mechanical system with d_q degrees of freedom, and denote by x_t its state at time t. Typically, x_t is defined as x_t = [q_t^T, q̇_t^T]^T, where q_t ∈ R^{d_q} and q̇_t ∈ R^{d_q} are, respectively, the vectors of joint positions and velocities. Assume that only q_t can be directly measured, whereas q̇_t is not directly measurable and must instead be estimated. We argue that this is a common scenario, as mechanical systems are often equipped with position sensors such as encoders but lack velocity sensors.
More accurate velocity estimates can be obtained from the history of position measurements, exploiting both past and future samples. These kinds of estimation techniques are intrinsically acausal, hence they can be performed offline and used only for modeling. Controllers must instead rely on causal online estimates, which are usually less accurate and affected by delays.

The discrete-time dynamics of the system is given by x_{t+1} = f(x_t, u_t) + w_t, where f(·) is an unknown transition function, u_t ∈ R^{d_u} represents the control action, and w_t ~ N(0, Σ_w) models uncertainty. RL algorithms aim to learn how to accomplish a given task based on interaction data. The task is encoded by a cost function c(x_t), defined to characterize the immediate penalty associated with being in state x_t. Control actions are chosen from a policy u = π_θ(x), parameterized by θ. Then, the objective is to find the policy that minimizes the expected cumulative cost over T time steps, starting from the initial state distribution p(x_0), i.e.,

J(θ) = Σ_{t=0}^{T} E_{x_t}[c(x_t)].    (1)

An MBRL algorithm generally consists of a succession of several trials, i.e., attempts to solve the desired task, each structured in three main phases:
- Model Learning: data collected during previous interactions are used to learn a model of the system dynamics. At the beginning, the first data are collected by applying an exploratory policy, e.g. random control actions;
- Policy Update: the control policy is optimized in order to minimize an estimate of the cost J(θ) obtained by exploiting the trained model;
- Policy Execution: the updated control policy is applied to the system and the interaction data are stored.

In order to comply with the common conditions in real mechanical systems described above, we propose an MBRL algorithm to control mechanical systems without assuming access to either measurements or estimates of velocities.
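The trial structure and the Monte Carlo estimate of (1) can be sketched as a generic training loop. This is an illustrative skeleton only, with hypothetical interfaces (`explore`, `model.fit`, `policy.optimize`, `env.rollout`), not the authors' implementation:

```python
import numpy as np

def expected_cumulative_cost(rollouts, cost_fn):
    """Monte Carlo estimate of J(theta) in (1): average the summed
    per-step costs over a batch of simulated state trajectories.
    rollouts has shape (M, T + 1, d_x): M particles, T + 1 time steps."""
    return np.mean([sum(cost_fn(x) for x in traj) for traj in rollouts])

def mbrl_loop(env, policy, model, cost_fn, n_trials, explore):
    """Generic MBRL trial structure (hypothetical interfaces)."""
    data = explore(env)                      # initial exploratory interaction
    for _ in range(n_trials):
        model.fit(data)                      # 1) model learning
        policy.optimize(model, cost_fn)      # 2) policy update on the model
        data += env.rollout(policy)          # 3) policy execution, store data
    return policy
```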
B. GPR for Model Learning

Given a data set of state-action pairs measured during the interactions with the system, it is possible to use GPR to train a probabilistic model that approximates the unknown transition function f(·). A common strategy in the literature [14], [15] is to model the evolution of each state dimension with a distinct zero-mean GP. Let us indicate with x^(i) the i-th component of the state, for i ∈ {1, ..., d_x} (d_x is the dimension of the state vector), and define x̃_t = [x_t^T, u_t^T]^T. The i-th GP takes x̃_t as input and predicts x^(i)_{t+1} − x^(i)_t. The GPs are completely characterized by their kernel functions, which represent our belief on the a priori covariance. A common choice is the Squared Exponential (SE) kernel,

k_SE(x̃_{tj}, x̃_{tk}) := λ² exp(−||x̃_{tj} − x̃_{tk}||²_{Λ⁻¹}).    (2)

Given a data set of state-action pairs D, the GPs provide a closed-form expression of p(x_{t+1} | x̃_t, D), the posterior distribution of the estimated state at time t + 1. For further details about GPR and its application to dynamical system modeling, the reader can refer to [4].

III. VELOCITY-FREE MC-PILCO

Here we present the algorithm VF-MC-PILCO (Velocity-Free Monte Carlo Probabilistic Inference for Learning COntrol), whose objective is to solve the problem defined in Sec. II-A without the need to perform any kind of velocity estimation. In fact, tuning effective estimators may be a tedious and complex operation, especially in the presence of high measurement noise, and this can significantly compromise the MBRL algorithm if not duly considered. VF-MC-PILCO circumvents these issues by adopting a VF formulation. Inspired by [12], [13], we consider a VF model of the system dynamics, given in the following general form:

q_{t+1} = f_df(q_t, q_{t−1}, ..., q_{t−m_q}, u_t, ..., u_{t−m_u}).    (3)
The joint positions at the next time step are predicted based on the history of past positions, from t back to t − m_q, and the history of applied control actions, from t back to t − m_u. We call m_q and m_u, respectively, the position memory and the control memory of the VF model. In this new VF framework, it is convenient to redefine the state of the system as x_t = [q_t^T, ..., q_{t−m_q}^T, u_{t−1}^T, ..., u_{t−m_u}^T]^T.

In the following, we present the model learning and policy update phases of the VF-MC-PILCO algorithm, detailing how they have been adapted to the new VF formulation.

A. VF Model Learning

We employ the GPR framework of Sec. II-B, but instead of considering a full state representation with velocities, we train a VF GP model of the form (3). Let us denote by q^(i)_t the position of the i-th joint at time t, and define Δ^(i)_{qt} = q^(i)_{t+1} − q^(i)_t, for i ∈ {1, ..., d_q}. The evolution of Δ^(i)_{qt} for each i is modeled by a distinct GP, whose input depends on [q_t^T, ..., q_{t−m_q}^T, u_t^T, u_{t−1}^T, ..., u_{t−m_u}^T]^T. Trivially, the transition functions of q_{t−1}, ..., q_{t−m_q}, u_{t−1}, ..., u_{t−m_u} are deterministic and known.
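A minimal NumPy sketch of the SE kernel (2) and of the per-joint GP targets Δ^(i)_{qt} built from raw positions may help make this concrete. The function names are illustrative, and Λ is taken diagonal for simplicity:

```python
import numpy as np

def k_se(x1, x2, lam, Lam_diag):
    """Squared Exponential kernel (2): lam^2 * exp(-||x1 - x2||^2_{Lam^-1}),
    with a diagonal length-scale matrix Lam_diag."""
    d = x1 - x2
    return lam**2 * np.exp(-np.sum(d**2 / Lam_diag))

def gp_targets(q):
    """Per-joint one-step position differences used as GP targets:
    Delta_q[t, i] = q[t + 1, i] - q[t, i], for a (T, d_q) position log."""
    return q[1:] - q[:-1]
```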
For +instance, we applied a sin-cos expansion to angular quantities +during some of the experiments presented in the next sections. +B. VF Particle-Based Policy Gradient +The GP-based VF predictive model of Sec. III-A is now +employed to optimize the policy parameters θ following +a particle-based policy gradient strategy. VF-MC-PILCO +computes ˆJpθq, an approximation of Jpθq in (1) exploiting +the posterior distribution ppxt`1|˜xt, Dq defined by the GPs. +Finally, it updates the θ with a gradient-based procedure. +The computation of ˆJpθq entails the simulation of the +effects of the policy πθ on M independent state particles +by cascading the one-step-ahead stochastic predictions. In +particular, let qpmq +t +, for m “ 1, . . . , M, represent the position +of the M state particles simulated by the VF GP model. +Starting positions are sampled from a given distribution +ppq0q. We assume that the system is not moving at t “ 0, +i.e., qpmq +0 +“ qpmq +´1 “ ¨ ¨ ¨ “ qpmq +´mq. At each time step t, in +order to simulate the presence of measurement error, we +corrupt the particle positions qpmq +t +with a fictitious noise +epmq +t +, e.g. a zero mean Gaussian i.i.d. noise, obtaining a +set of simulated measurements ¯qpmq +t +“ qpmq +t +` epmq +t +. Then, +for each particle, the policy πθ selects the next control +actions upmq +t +according to the history of the simulated +measurements, ¯xpmq +t +“ r¯qpmqT +t +, . . . , ¯qpmqT +t´mqsT . Finally, the +M positions at the next time step, t ` 1, are simulated +by forward sampling from the distributions derived by the +VF GP model ppxpmq +t`1|˜xpmq +t +, Dq (for m “ 1 . . . M) with +˜xpmq +t +defined for each particle m as in (4). This procedure +is iterated for T time steps, obtaining M different particle +trajectories ttxpmq +t +uM +m“1uT +t“0, that simulate the results of the +policy. The particle generation procedure is depicted in the +block scheme of Fig. 1. 
The sample mean of the costs incurred by the different particles provides an estimate of the expected cumulative cost, namely

Ĵ(θ) = Σ_{t=0}^{T} ( (1/M) Σ_{m=1}^{M} c(x^(m)_t) ).    (5)

The computational graph resulting from (5) allows us to compute ∇_θ Ĵ(θ), i.e., the gradient of Ĵ(θ) w.r.t. θ, through backpropagation, exploiting the reparametrization trick [16], [17] to propagate the gradient through the stochastic operations. Finally, a stochastic gradient descent algorithm, e.g. Adam [18], can exploit the estimated gradient to update θ.

Fig. 1: VF-MC-PILCO particle generation block scheme.

C. Policy structure

We considered an RBF network policy with outputs limited by a hyperbolic tangent function, properly scaled. We call this function a squashed-RBF-network, and it is expressed as

π_θ(x*) = u_max tanh( (1/u_max) Σ_{i=1}^{n_b} w_i exp(−||a_i − x*||²_{Σ_π}) ).    (6)

The input vector of the policy is defined as

x*_t = [q_t^T, Δ_{qt−1}^T, ..., Δ_{qt−m_q}^T]^T,    (7)

where we provide the policy with the same consecutive differences of position measurements used in the GP input (4). The policy parameters are θ = {w, A, Σ_π}, where w = [w_1, ..., w_{n_b}] and A = {a_1, ..., a_{n_b}} are, respectively, the weights and the centers of the n_b basis functions, while Σ_π is a diagonal matrix that determines their shapes. The maximum control u_max is constant and depends on the application. It is worth mentioning that VF-MC-PILCO is not restricted to this particular choice of policy function.

IV. SIMULATED EXPERIMENT: CART-POLE SWING-UP

As a preliminary validation, we tested VF-MC-PILCO on a simulated cart-pole swing-up task to analyze its performance under different setups. We compare the proposed approach with the s.o.t.a. MBRL algorithm specifically designed to deal with partial state measurability of real mechanical systems, MC-PILCO4PMS [11].
MC-PILCO4PMS follows a particle-based policy gradient framework similar to the one depicted in Sec. III-B but, differently from the proposed VF-MC-PILCO, it works with velocity estimates, simulating not only the evolution of the system state but also the evolution of the estimated state, which entails modeling the measurement system and the implemented online filters. Notice that the implementation of MC-PILCO4PMS can in some cases be complex or time-consuming, due to the requirement of reproducing the online filtering procedure inside the policy optimization phase and the need to adopt a different offline filter for model learning. This is the limitation that the proposed method aims to remove. Both algorithms have been implemented in Python1, exploiting the PyTorch library [19].

1 https://www.merl.com/research/license/MC-PILCO

Now, let us briefly describe the characteristics of the simulated scenario. Let p_t and α_t be, respectively, the position of the cart and the angle of the pole at time step t, hence q_t = [p_t, α_t]^T. The target configuration corresponding to the pendulum swing-up is given by p_des = 0 [m] and |α_des| = π [rad]. The cart-pole starts from α_0 = 0 [rad] and p_0 = 0 [m]. The control action is the force applied to the cart, and the system is controlled at 30 [Hz]. We considered Gaussian measurement noise with standard deviation 10^−3 [m] for positions and 2·10^−3 [rad] for angles.

The GPs of the VF model are equipped with the SE kernel described in (2). The policy adopted is a squashed-RBF-network with n_b = 200 basis functions and u_max = 10 [N]. The number of particles is set to M = 400 during policy optimization. In order to avoid singularities due to the angles, we replaced, in both the model input x̃_t defined in (4) and the policy input x* defined in (7), occurrences of α_t with sin(α_t) and cos(α_t).
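For reference, the squashed-RBF-network of (6) and the sin-cos expanded cart-pole policy input can be sketched in NumPy as follows (an illustrative sketch with a diagonal Σ_π; the actual policy is a differentiable PyTorch module):

```python
import numpy as np

def squashed_rbf_policy(x_star, w, A, sigma_pi_diag, u_max):
    """Policy (6): u_max * tanh((1/u_max) * sum_i w_i exp(-||a_i - x*||^2_{Sigma_pi})).
    A has one RBF center per row; sigma_pi_diag holds the diagonal of Sigma_pi."""
    d2 = np.sum((A - x_star) ** 2 / sigma_pi_diag, axis=1)  # weighted sq. distances
    return u_max * np.tanh(np.dot(w, np.exp(-d2)) / u_max)

def cartpole_policy_input(p_hist, alpha_hist):
    """Build x*_t for the cart-pole: current (p, sin(alpha), cos(alpha)) plus the
    consecutive differences of the last measured positions, as in (7)."""
    dp = np.diff(p_hist)       # Delta_p over the position memory
    da = np.diff(alpha_hist)   # Delta_alpha over the position memory
    head = [p_hist[-1], np.sin(alpha_hist[-1]), np.cos(alpha_hist[-1])]
    return np.concatenate((head, dp, da))
```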
Exploration data were collected by random actions, obtained by filtering Gaussian white noise with cut-off frequency 1.5 [Hz]. The cost function is

c(x_t) = 1 − exp(−((|α_t| − π)/l_α)² − (p_t/l_p)²),    (8)

where l_α = 3 and l_p = 1 define the shape of c(·). The absolute value on α_t is needed to allow swing-up solutions towards both of the equivalent target pole angles π and −π.

The objective is to analyze different VF-MC-PILCO configurations and compare their performance with the results obtained by MC-PILCO4PMS, as a benchmark. We analyzed the results obtained in 50 distinct experiments, consisting of 5 trials of length 3 seconds, varying the random seed each time. In this way, it is possible to evaluate the robustness of the algorithm to different exploration trajectories and policy initializations, as well as to different noise realizations. In particular, we investigate the effects that different position and control memories, m_q and m_u, have on modeling and policy learning. We studied four different VF-MC-PILCO configurations, choosing the value of m_q between 1 and 2, and m_u between 0 and 1. In the following, we refer to these alternatives with the symbol VF^{mu}_{mq}.

A. Modeling results

We compared the accuracy of the different VF GP models by looking at the absolute values of the prediction errors observed on the data recorded at the last trial of all 50 experiments. Models were trained using all the data collected up to that trial. Fig. 2 reports the results by means of box plots, showing median values, confidence intervals, and outliers. The results show that the presence of the input history u_{t−1} as part of the GP inputs is beneficial and, within this choice (see models with m_u = 1), the best results are obtained by VF^{mu=1}_{mq=2}.
On the other hand, the greater errors and the significant number of outliers obtained by models with m_u = 0 seem to indicate that these setups are not fully capable of fitting the recorded position changes. It also appears that using a longer position memory improves prediction accuracy only when the control memory is m_u = 1. We can conclude that it is beneficial to provide the VF GP models with information about past control actions (m_u = 1) in order to fit the system dynamics without relying on velocities.

Fig. 2: Absolute p and α prediction errors obtained by different VF GP models at trial 5 in the simulated cart-pole experiments.

B. Policy learning results

In this section, we evaluate the performance of the control policies learned by the different VF-MC-PILCO setups and by MC-PILCO4PMS. Notice that MC-PILCO4PMS achieved results comparable to or better than other state-of-the-art GP-based MBRL algorithms, see [11]. The cumulative costs and success rates obtained at each trial in the 50 experiments are reported in Fig. 3. In the two plots, the cumulative cost is reported in terms of median values and confidence intervals defined by the 5-th and 95-th percentiles. As one would expect, the worse modeling results of VF^{mu=0}_{mq=1} and VF^{mu=0}_{mq=2} lead to unsatisfactory policy learning. These VF-MC-PILCO setups manage to complete a successful swing-up in only approximately half of the cases. On the other hand, when using m_u = 1, VF-MC-PILCO is able to robustly find an optimal solution for the task by trial 5. In particular, the performance of VF^{mu=1}_{mq=2} is almost equivalent to that of MC-PILCO4PMS.
This result confirms the effectiveness of the proposed method: with less information, as we are not manually tuning any velocity estimator, VF-MC-PILCO achieves state-of-the-art performance. For the user, this corresponds to less effort and a more general method, without significantly compromising performance.

Fig. 3: Cumulative costs registered during simulated cart-pole experiments for the four considered VF-MC-PILCO setups and the MC-PILCO4PMS benchmark (indicated by the shorthand PMS). The observed success rates are presented in the table below.

                    Trial 1  Trial 2  Trial 3  Trial 4  Trial 5
VF^{mu=0}_{mq=1}       0%      14%      34%      46%      56%
VF^{mu=0}_{mq=2}       0%      10%      18%      28%      52%
VF^{mu=1}_{mq=1}       0%       8%      52%      86%      96%
VF^{mu=1}_{mq=2}       0%      20%      73%      93%     100%
PMS                    0%      14%      82%      98%      96%

C. Analysis of input vector structure

Before concluding this section, we would like to analyze the reasons behind the decision to use (4) and (7) as the GP and policy input vectors, respectively. To this end, we compared the results obtained by VF-MC-PILCO with position memory m_q = 2 and control memory m_u = 1 using two different structures for the input vectors. The first employs directly the history of positions and actions up to time step t as GP input, i.e., x̃_t = [q_t^T, ..., q_{t−m_q}^T, u_t^T, ..., u_{t−m_u}^T]^T, and the history of positions as policy input, i.e., x*_t = [q_t^T, ..., q_{t−m_q}^T]^T. The second version is the one employed previously, with GP input and policy input defined as in (4) and (7), respectively. To distinguish the two implementations, we label the first VF^{mu=1}_{mq=2} naive, and the second VF^{mu=1}_{mq=2} with position differences.
We analyzed the results obtained by these two setups in 50 distinct experiments, consisting of 7 trials of length 3 seconds, varying the random seed each time. The obtained cumulative costs are reported in Fig. 4 in terms of median values and 5–95 percentile ranges.

It is clear that providing information about the rate of change of the position measures, by using (4) and (7) as input vectors, greatly improves the data efficiency of the VF-MC-PILCO algorithm. In fact, the VF^{m_u=1}_{m_q=2} naive implementation (which uses directly the history of positions and controls) shows a much slower convergence, reaching a 79% success rate only at trial 7. On the other hand, VF^{m_u=1}_{m_q=2} with position differences is able to always find a solution by trial 5. This result underlines the importance of the information carried by the differences between consecutive measured positions. Without this information, the model needs more data to correctly capture the dynamics of the system relying only on positions. Through the input vectors (4) and (7), we are able to provide the model with knowledge about a sort of velocity, without requiring any kind of filtering procedure.

Fig. 4: Cumulative costs registered by VF-MC-PILCO with different GP and policy input structures that include past positions either as consecutive differences (Δq_{t−1}, ..., Δq_{t−m_q}) or directly (q_{t−1}, ..., q_{t−m_q}).

V. SIMULATED EXPERIMENT: UR5 ROBOT CONTROL

The objective of this experiment is to test VF-MC-PILCO on a more complex system with a higher number of DoF. We used VF-MC-PILCO to learn a joint-space torque controller for a UR5, a robotic manipulator with 6 DoF, simulated in MuJoCo [20], assuming to measure only joint angles and not velocities. Measurements are perturbed by white Gaussian noise with a standard deviation of 10^{-3}.
Let us denote by q_t ∈ R^6 the joint angles and by u_t ∈ R^6 the applied torques. Our objective is to learn a VF control policy able to follow a desired trajectory {q_t^r}, t = 1, ..., T. Let e_t = q_t^r − q_t denote the position error at time t. The VF-MC-PILCO memories were set to m_q = 2 and m_u = 1, hence the VF state of the system at time step t is defined as x_t = [q_t^T, q_{t−1}^T, q_{t−2}^T, u_{t−1}^T]^T. The GP input vector was defined by applying a sin-cos expansion of the angular quantities, i.e., x̃_t = [sin(q_t)^T, cos(q_t)^T, Δq_{t−1}^T, Δq_{t−2}^T, u_t^T, u_{t−1}^T]^T. The adopted policy is a multi-output squashed-RBF-network with n_b = 400 basis functions and u_max = 1 [N·m] for all the joints. M = 200 particles were used for gradient estimation during optimization. The policy takes as input the vector x*_t = [sin(q_t)^T, cos(q_t)^T, Δq_{t−1}^T, Δq_{t−2}^T, e_t^T]^T. Fig. 5 represents the overall control scheme.

Fig. 5: VF-MC-PILCO control scheme for the simulated UR5.

In this experiment, we considered a control horizon of 4 seconds with a sampling time of 0.02 seconds. The reference trajectory was calculated to make the end-effector draw a circle in the X-Y operational space. The initial exploration used to initialize the VF GP model is provided by a poorly-tuned PD controller (for which we estimated velocities by backward differentiation). We considered the following cost,

c(x_t) = 1 − exp(−‖q_t^r − q_t‖²).

Fig. 6: Average end-effector position tracking errors obtained during the exploratory phases and by the control policy learned at trial 3.

Fig. 7: Example of explorative and final end-effector trajectories.
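A minimal sketch of the cost and of the policy input construction described above (names are ours; the cost is written with the saturating minus sign inside the exponential, consistent with the other exponential costs in the paper):

```python
import numpy as np

def ur5_cost(q, q_ref):
    # Saturated tracking cost: 0 at q == q_ref, approaching 1 for large errors.
    return 1.0 - np.exp(-np.sum((q_ref - q) ** 2))

def policy_input(q, q_prev1, q_prev2, q_ref):
    # x*_t = [sin(q_t), cos(q_t), dq_{t-1}, dq_{t-2}, e_t], with e_t = q_ref - q_t.
    return np.concatenate([np.sin(q), np.cos(q),
                           q - q_prev1, q_prev1 - q_prev2, q_ref - q])
```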
The experiment was repeated 10 times, varying the random seed and the initial exploration trajectories, which were obtained each time by using random PD gains, uniformly sampled as K_P ∼ U(0.5, 2) and K_D ∼ U(0.01, 0.2). VF-MC-PILCO managed to learn an effective control policy by the third trial in all the repetitions, with average positioning errors no greater than 2 [mm]. The average end-effector tracking errors are reported in Fig. 6 by means of box plots. Fig. 7 shows an example of exploratory and final trajectories, taken from one of the conducted tests.

VI. EXPERIMENTS ON REAL MECHANICAL SYSTEMS

In this section, we report the results obtained by VF-MC-PILCO when applied to real systems. In particular, we experimented on two benchmark systems: a Furuta pendulum and a ball-and-plate (Fig. 8)². The objective is to compare the performance obtained by VF-MC-PILCO on these two setups with the results of MC-PILCO4PMS reported in [11].

²A video of the experiments on real mechanical systems is available at the following link: https://youtu.be/Hx3Y1Ib-6Tc.

Fig. 8: (Left) Furuta pendulum, with its base, arm, and pendulum links. (Right) Ball-and-plate system.

A. Furuta pendulum

The Furuta pendulum [21] is a popular nonlinear control benchmark composed of two revolute joints and three links (see Fig. 8, left). It is an under-actuated system, as only the horizontal joint is actuated by a DC servomotor. The two angles are measured by optical encoders with 4096 [ppr]. The control action is the motor voltage, and its maximum allowed value is 10 [V]. Let the pose at time step t be q_t = [α_t^h, α_t^v]^T, where α_t^h is the angle of the horizontal joint and α_t^v the angle of the vertical joint attached to the pendulum. The objective is to learn how to swing up the pendulum and stabilize it in the upward equilibrium (α_t^v = ±π [rad]) with α_t^h = 0 [rad], starting from q_0 = [0, 0]^T.
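For intuition about the sensing resolution: an encoder with 4096 pulses per revolution quantizes angles to steps of 2π/4096 [rad], so the quantization error is bounded by ±π/4096 [rad], which matches the support of the fictitious uniform noise the paper injects during particle generation. A minimal model of such a sensor (names are ours):

```python
import math

PPR = 4096                    # encoder pulses per revolution
STEP = 2.0 * math.pi / PPR    # angular resolution [rad]

def encoder_read(angle):
    # Quantize a true angle to the nearest encoder count.
    return round(angle / STEP) * STEP
```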
The trial length is 3 [s] and the system is controlled at 30 [Hz]. The cost is defined as

c(x_t) = 1 − exp(−(α_t^h / 2)² − ((|α_t^v| − π) / 2)²) + c_b(α_t^h),    (9)

with

c_b(α_t^h) = 1 / (1 + exp(−10 (−(3/4)π − α_t^h))) + 1 / (1 + exp(−10 (α_t^h − (3/4)π))).

The first part of the function in (9) aims at driving the two angles towards the target, while c_b(α_t^h) penalizes solutions that push the horizontal joint beyond a certain safety threshold. In this scenario, we used position memory m_q = 2 and control memory m_u = 1. We equipped the VF GP model with an SE kernel and adopted a squashed-RBF-network with n_b = 200 basis functions as control policy. M = 400 particles were simulated during policy optimization. In both the GP input x̃_t and the policy input x*_t, we replaced the occurrences of α_t^h and α_t^v with their sin-cos expansion, as previously done in the simulated cart-pole case. The exploration trajectory was obtained using as input a sum of ten cosine waves with random frequencies and the same amplitude. The presence of quantization errors was simulated during particle generation by corrupting the predicted angles with a uniform fictitious measurement noise U(−π/4096, π/4096) [rad].

Fig. 9: Real swing-up trajectory (bullets) and particle predictions (shaded lines) for α_v and α_h obtained by VF-MC-PILCO at trial 6 of the Furuta pendulum experiment.

VF-MC-PILCO learned how to swing up the Furuta pendulum at trial 6, i.e., after 18 seconds of experience. That is the same result obtained by MC-PILCO4PMS when using the SE kernel. Hence, the VF approach showed no particular difference in terms of data efficiency when compared with an approach that makes use of velocity estimates. In Fig.
9, we report the successful swing-up performed by VF-MC-PILCO at trial 6, together with the particles predicted by the VF GP model when simulating the effects of the same control policy. Notice how the particles' trajectories resemble almost perfectly the real behaviour of the two angles.

B. Ball-and-plate

The ball-and-plate system is composed of a square plate that can be tilted in two orthogonal directions, and a ball that is free to roll over it (see Fig. 8, right). A camera placed on top of the system tracks the ball and measures its position on the plate with a precision of one millimeter. Let, at time t, (b_t^x, b_t^y) be the position of the center of the ball, and let α_t^(1) and α_t^(2) be the angles of the two motors tilting the plate. Thus, q_t = [b_t^x, b_t^y, α_t^(1), α_t^(2)]^T. The drivers of the motors allow only position control and do not provide feedback about the motor angles. To keep track of them, we defined the control actions as the difference between two consecutive reference values sent to the motors, and we limited the maximum input to a sufficiently small value, i.e., 4 [deg], such that the motor controllers are able to reach the target within the sampling time. Then, as a first approximation, the reference angles and the actual motor angles coincide, and we have u_t^(1) = α_{t+1}^(1) − α_t^(1) and u_t^(2) = α_{t+1}^(2) − α_t^(2). The objective of the experiment is to learn how to control the motor angles in order to stabilize the ball around the center of the plate. The trial length is 3 seconds, with a sampling frequency of 30 [Hz]. The cost function encoding the task is

c(x_t) = 1 − exp(−g(x_t)),

with

g(x_t) = (b_t^x / 0.15)² + (b_t^y / 0.15)² + (α_t^(1))² + (α_t^(2))².

Fig. 10: Example of GP targets (Δb_t^x and Δb_t^y) in the ball-and-plate experiment, i.e.
measured ball position changes in the X and Y directions.

With regard to the VF model setup, we considered position memory m_q = 2 and control memory m_u = 1, and we replaced, in both the GP input x̃_t and the policy input x*_t, the occurrences of α_t^(1) and α_t^(2) with their sin-cos expansion. Analogously to the previous MC-PILCO4PMS experiment, the kernel function of the VF GP model is given by the sum of an SE kernel, which takes as input the whole GP input vector, and a linear kernel, which takes as input only the sin-cos expansion of the angular quantities. The control policy is a squashed-RBF-network with n_b = 400 basis functions. Policy optimization involves the use of M = 400 particles. The initial exploration is implemented in two trials, in which the control signals are two distinct noisy triangular waves. Especially during exploration and the initial trials, the ball might touch the borders of the plate. In those cases, we kept the data up to the collision instant and discarded it thereafter. The presence of quantization errors was simulated during particle generation by corrupting the predicted positions with a uniform fictitious measurement noise U(−0.001/2, 0.001/2) [m]. A peculiarity of this experiment, in comparison to the previous ones, is the wide range of initial conditions: the policy must learn how to drive the ball to the center starting from any position on the plate's surface. Hence, the initial distribution considered for b_0^x and b_0^y is the uniform U(−0.15, 0.15) [m]. The measurements provided by the camera are affected by a significant quantization error. For instance, Fig. 10 reports the measured differences between consecutive ball positions during a trial. Consider that these quantities are the targets of the GPs in the VF model. In such a context, ball velocity estimation can be very challenging.
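The incremental action scheme and the task cost described above can be sketched as follows (a minimal illustration with our own names; the 0.15 [m] scaling matches the plate half-size used in g(x_t)):

```python
import numpy as np

MAX_STEP = np.deg2rad(4.0)  # maximum allowed reference increment per sample

def apply_action(alpha_ref, u):
    # The control action is the increment of the motor reference angle,
    # clipped so the position controller can track it within one sample.
    return alpha_ref + np.clip(u, -MAX_STEP, MAX_STEP)

def bp_cost(bx, by, a1, a2):
    # c(x_t) = 1 - exp(-g(x_t)), with g(x_t) as defined in the text.
    g = (bx / 0.15) ** 2 + (by / 0.15) ** 2 + a1 ** 2 + a2 ** 2
    return 1.0 - np.exp(-g)
```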
In fact, to apply MC-PILCO4PMS to the same system, methods like finite differences and low-pass filtering were not sufficient, and it was necessary to implement a Kalman filter (online) and a Kalman smoother (offline), whose tuning was a delicate and time-consuming procedure of critical importance for the success of the algorithm. On the contrary, VF-MC-PILCO managed to solve the task by working directly with raw position measurements, without the need to apply any kind of filtering. Besides that, VF-MC-PILCO proved to be surprisingly data-efficient, being able to solve the task at the second trial, after only 7.97 seconds of experience, whereas MC-PILCO4PMS solved the task after 11.33 seconds.

Fig. 11: Ten different ball trajectories obtained by the VF-MC-PILCO policy. Steady-state positions are marked with black crosses. The dashed circle has the same diameter as the ball.

We tested the policy starting from ten different points in order to compare the two policies obtained by VF-MC-PILCO (Fig. 11) and MC-PILCO4PMS. The mean steady-state error, i.e., the average distance of the last ball position from the center observed in the ten tests, was 0.0134 [m], while the MC-PILCO4PMS final policy obtained a slightly better result, with a mean error of 0.0099 [m]. This might be due to the difference between the two policy inputs: MC-PILCO4PMS relies on a Kalman filter, while VF-MC-PILCO works directly with raw measurements (in the presence of significant noise). Nevertheless, this performance difference is negligible, given the dimension of the ball, whose radius is 0.016 [m].

VII. CONCLUSIONS

We presented VF-MC-PILCO, a novel MBRL algorithm specifically designed to learn from scratch how to control mechanical systems, without the need to compute any explicit velocity estimates.
In our opinion, this may be a critical advantage when dealing with real systems affected by significant measurement noise, since, in this kind of scenario, the design of accurate velocity estimators can be a tedious task. The algorithm uses GPR to model the joint position changes, based on the history of past control actions and measurements. VF-MC-PILCO was tested both in simulated environments (cart-pole and UR5 robot) and on two real mechanical systems (Furuta pendulum and ball-and-plate rig). It proved able to solve all the tasks, with results in line with the performance of our previous MBRL algorithm (MC-PILCO4PMS), which instead works with a complete state representation and must perform velocity estimation.

REFERENCES

[1] Athanasios S. Polydoros and Lazaros Nalpantidis. Survey of model-based reinforcement learning: Applications on robotics. Journal of Intelligent & Robotic Systems, 86(2):153–173, 2017.
[2] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
[3] Christopher G. Atkeson and Juan Carlos Santamaria. A comparison of direct and model-based reinforcement learning. In Proceedings of the International Conference on Robotics and Automation, volume 4, pages 3557–3564. IEEE, 1997.
[4] Christopher K. I. Williams and Carl Edward Rasmussen. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.
[5] Marc Deisenroth and Carl E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 465–472, 2011.
[6] Marc Peter Deisenroth, Carl Edward Rasmussen, and Dieter Fox. Learning to control a low-cost manipulator using data-efficient reinforcement learning. Robotics: Science and Systems VII, pages 57–64, 2011.
[7] Felix Berkenkamp, Matteo Turchetta, Angela Schoellig, and Andreas Krause. Safe model-based reinforcement learning with stability guarantees.
In Advances in Neural Information Processing Systems, pages 908–918, 2017.
[8] Konstantinos Chatzilygeroudis, Roberto Rama, Rituraj Kaushik, Dorian Goepp, Vassilis Vassiliades, and Jean-Baptiste Mouret. Black-box data-efficient policy search for robotics. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 51–58. IEEE, 2017.
[9] Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pages 4754–4765, 2018.
[10] Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. Model-ensemble trust-region policy optimization. arXiv preprint arXiv:1802.10592, 2018.
[11] Fabio Amadio, Alberto Dalla Libera, Riccardo Antonello, Daniel Nikovski, Ruggero Carli, and Diego Romeres. Model-based policy search using Monte Carlo gradient estimation with real systems application. IEEE Transactions on Robotics, 38(6):3879–3898, 2022.
[12] D. Romeres, M. Zorzi, R. Camoriano, S. Traversaro, and A. Chiuso. Derivative-free online learning of inverse dynamics models. IEEE Transactions on Control Systems Technology, 28(3):816–830, 2020.
[13] A. Dalla Libera, D. Romeres, D. K. Jha, B. Yerazunis, and D. Nikovski. Model-based reinforcement learning for physical systems without velocity and acceleration measurements. IEEE Robotics and Automation Letters, 5(2):3548–3555, 2020.
[14] Marc Peter Deisenroth, Dieter Fox, and Carl Edward Rasmussen. Gaussian processes for data-efficient learning in robotics and control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):408–423, 2013.
[15] Diego Romeres, Devesh K. Jha, Alberto Dalla Libera, Bill Yerazunis, and Daniel Nikovski. Semiparametrical Gaussian processes learning of forward dynamical models for navigating in a circular maze.
In 2019 International Conference on Robotics and Automation (ICRA), pages 3195–3202. IEEE, 2019.
[16] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[17] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pages 1278–1286. PMLR, 2014.
[18] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[19] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024–8035, 2019.
[20] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033. IEEE, 2012.
[21] Benjamin Seth Cazzolato and Zebb Prime. On the dynamics of the Furuta pendulum. Journal of Control Science and Engineering, 2011, 2011.
Learning Control from Raw Position Measurements

Fabio Amadio¹, Alberto Dalla Libera¹, Daniel Nikovski², Ruggero Carli¹, Diego Romeres²

Abstract— We propose a Model-Based Reinforcement Learning (MBRL) algorithm named VF-MC-PILCO, specifically designed for application to mechanical systems where velocities cannot be directly measured. This circumstance, if not adequately considered, can compromise the success of MBRL approaches. To cope with this problem, we define a velocity-free state formulation which consists of the collection of past positions and inputs. Then, VF-MC-PILCO uses Gaussian Process Regression to model the dynamics of the velocity-free state and optimizes the control policy through a particle-based policy gradient approach. We compare VF-MC-PILCO with our previous MBRL algorithm, MC-PILCO4PMS, which handles the lack of direct velocity measurements by modeling the presence of velocity estimators.
Results on both simulated (cart-pole and UR5 robot) and real mechanical systems (Furuta pendulum and a ball-and-plate rig) show that the two algorithms achieve similar results. Conveniently, VF-MC-PILCO does not require the design and implementation of state estimators, which can be a challenging and time-consuming activity to be performed by an expert user.

I. INTRODUCTION

Model-Based Reinforcement Learning (MBRL) [1] proved to be a promising strategy to overcome the challenges of delivering Reinforcement Learning (RL) [2] solutions to real-world problems. In fact, standard Model-Free RL algorithms usually require a massive amount of interaction with the system to solve the assigned task. This requirement might be unfeasible in many real-world applications, e.
g., control of mechanical systems and robotics, due to the limited time available and the risk of damaging the devices involved in such a long training phase. On the other hand, MBRL uses the collected data to train a predictive model of the environment and updates the policy based on model simulations. With this strategy, we are able to extract more valuable information from the available data and increase data efficiency [3]. Nevertheless, the effectiveness of MBRL methods strongly depends on how accurately the trained model can simulate the behavior of the environment. For this reason, it is necessary to adopt stochastic models in order to capture uncertainty about the predictions. Different classes of models have been employed, from Gaussian Processes (GPs) [4] in [5], [6], [7], [8], to ensembles of probabilistic deep neural networks in [9], [10]. The application of this kind of method to real-world environments is affected by another major problem: the full state of a real system is often only partially measurable.
¹Fabio Amadio, Alberto Dalla Libera, and Ruggero Carli are with the Department of Information Engineering, University of Padova, Via Gradenigo 6/B, 35131 Padova, Italy [fabio.amadio@phd.unipd.it, dallaliber@dei.unipd.it, carlirug@dei.unipd.it].
²Diego Romeres and Daniel Nikovski are with Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA 02139 [romeres@merl.com, nikovski@merl.com].
For instance, when dealing with mechanical systems, joint positions can be measured by means of proper sensors, e.g., encoders, while velocities can only be estimated from the history of sampled positions. In our previous work [11], we proposed an MBRL algorithm, called MC-PILCO4PMS, specifically tailored to deal with Partially Measurable Systems and to correctly take into account the presence of online and offline state observers. It proved able to robustly learn from scratch how to control mechanical systems, in both simulated and real environments, even when the velocity is not directly measurable. However, the tuning of accurate filters and state estimators can be particularly challenging and time-consuming for systems affected by significant noise. In this work, we present an alternative approach, called VF-MC-PILCO, that completely circumvents the necessity of estimating velocities by working only with the history of measured positions and applied control actions.
We adopted a Velocity-Free (VF) predictive model, similar to the one proposed in [12], [13], together with a control policy whose input depends only on positions. Compared to the works in [12], [13], which focused on the modeling part, in this work we propose a complete VF solution to the RL problem. The proposed method is first tested on two simulated systems with an increasing number of DoF, i.e., a cart-pole and a 6-DoF UR5 robot. Then, VF-MC-PILCO is tested on two real systems with an increasing level of difficulty for velocity estimation, i.e., a Furuta pendulum equipped with encoders and a ball-and-plate system equipped with an external camera to infer the ball positions.
VF-MC-PILCO correctly solved all the tasks with performance similar to that obtained by MC-PILCO4PMS. To solve such tasks, MC-PILCO4PMS must accurately reproduce the online filter employed inside the policy optimization algorithm and implement an offline filtering procedure for estimating the velocities needed for modeling. On the contrary, VF-MC-PILCO, by working directly with raw measurements, presents an alternative way that requires less effort and expertise from the user, obtaining similar performance despite the presence of significant noise. The comparisons are carried out against MC-PILCO4PMS because this algorithm was shown to outperform other state-of-the-art MBRL algorithms in [11]. The remainder of this paper is structured as follows.
In Sec. II, we formulate the problem we aim to solve and describe the use of GPs for modeling. In Sec. III, we detail the proposed algorithm, VF-MC-PILCO. Section IV illustrates the validation conducted on the simulated cart-pole benchmark. Section V reports the experiment performed on a simulated UR5 robot, testing VF-MC-PILCO's capacity to handle systems with up to 6 DoF. Section VI shows the results of the experiments with the two real mechanical systems. Finally, we draw conclusions in Section VII.

arXiv:2301.13183v1 [cs.RO] 30 Jan 2023
II. BACKGROUND

In this section, we first introduce the problem of MBRL on real mechanical systems. Then, we briefly discuss how Gaussian Process Regression (GPR) is usually used for modeling purposes.

A. Problem Formulation

Consider a mechanical system with $d_q$ degrees of freedom, and denote by $x_t$ its state at time $t$. Typically, $x_t$ is defined as $x_t = [q_t^\top, \dot{q}_t^\top]^\top$, where $q_t \in \mathbb{R}^{d_q}$ and $\dot{q}_t \in \mathbb{R}^{d_q}$ are, respectively, the vectors of joint positions and velocities. Assume that only $q_t$ can be directly measured, whereas $\dot{q}_t$ is not directly measurable, but instead must be estimated. We argue that this is a common scenario, as mechanical systems are often equipped with position sensors such as encoders, but lack velocity sensors.
More accurate velocity estimates can be obtained from the history of position measurements, exploiting both past as well as future samples. These kinds of estimation techniques are intrinsically acausal, hence they can be performed offline and used only for modeling. Thus, controllers must rely on causal online estimates, which are usually less accurate and affected by delays. The discrete-time dynamics of the system is given by $x_{t+1} = f(x_t, u_t) + w_t$, where $f(\cdot)$ is an unknown transition function, $u_t \in \mathbb{R}^{d_u}$ represents the control action, and $w_t \sim \mathcal{N}(0, \Sigma_w)$ models uncertainty. RL algorithms aim to learn how to accomplish a given task based on interaction data. The task is encoded by a cost function $c(x_t)$, defined to characterize the immediate penalty associated with being in state $x_t$. Control actions are chosen from a policy $u = \pi_\theta(x)$, parameterized by $\theta$.
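To make the interaction model concrete, the following sketch simulates the noisy discrete-time dynamics under a policy and accumulates the immediate cost over a horizon. The linear dynamics, the saturated-distance cost, and all constants are hypothetical placeholders, not the systems studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, u):
    # hypothetical stable linear dynamics standing in for the unknown f(x_t, u_t)
    A = np.array([[1.0, 0.1], [0.0, 0.9]])
    B = np.array([0.0, 0.1])
    return A @ x + B * u

def cost(x):
    # saturated distance from the origin: one common choice of immediate cost c(x_t)
    return 1.0 - np.exp(-np.sum(x ** 2))

def rollout(policy, x0, T=50, sigma_w=1e-3):
    # simulate x_{t+1} = f(x_t, u_t) + w_t and accumulate the cost over T steps
    x, J = x0, 0.0
    for _ in range(T):
        J += cost(x)
        x = f(x, policy(x)) + rng.normal(0.0, sigma_w, size=x.shape)
    return J

J = rollout(lambda x: -0.5 * x[0], np.array([1.0, 0.0]))
```

Repeating such rollouts for different policies and averaging the accumulated cost is what the expected cumulative cost formalizes.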
Then, the objective is to find the policy that minimizes the expected cumulative cost over $T$ time steps, starting from the initial state distribution $p(x_0)$, i.e.,

$$J(\theta) = \sum_{t=0}^{T} \mathbb{E}_{x_t}\left[ c(x_t) \right]. \quad (1)$$

An MBRL algorithm consists, generally, of the succession of several trials, i.e., attempts to solve the desired task, and each trial is structured in three main phases:
• Model Learning: data collected during previous interactions are used to learn a model of the system dynamics. At the beginning, the first data are collected by applying an exploratory policy, e.g., random control actions;
• Policy Update: the control policy is optimized in order to minimize an estimate of the cost $J(\theta)$ obtained by exploiting the trained model;
• Policy Execution: the updated control policy is applied to the system and interaction data are stored.

In order to comply with the common conditions in real mechanical systems described above, we propose an MBRL algorithm to control mechanical systems without assuming the availability of either measurements or estimates of the velocities.

B. GPR for Model Learning

Given a data set of state-action pairs measured during the interactions with the system, it is possible to use GPR to train a probabilistic model that approximates the unknown transition function $f(\cdot)$. A common strategy in the literature [14], [15] is to model the evolution of each state dimension with a distinct zero-mean GP. Let us indicate with $x^{(i)}$ the $i$-th component of the state, for $i \in \{1, \dots, d_x\}$ ($d_x$ is the dimension of the state vector), and define $\tilde{x}_t = [x_t^\top, u_t^\top]^\top$. The $i$-th GP takes $\tilde{x}_t$ as input, and predicts $x^{(i)}_{t+1} - x^{(i)}_t$. The GPs are completely characterized by their kernel functions, which represent our belief on the a priori covariance. A common choice is the Squared Exponential (SE) kernel,

$$k_{SE}(\tilde{x}_{t_j}, \tilde{x}_{t_k}) := \lambda^2 e^{-\|\tilde{x}_{t_j} - \tilde{x}_{t_k}\|^2_{\Lambda^{-1}}}. \quad (2)$$

Given a data set of state-action pairs $\mathcal{D}$, the GPs provide a closed-form expression of $p(x_{t+1} | \tilde{x}_t, \mathcal{D})$, the posterior distribution of the estimated state at time $t+1$. For further details about GPR and its application to dynamical system modeling, the readers can refer to [4].
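A minimal sketch of the GP machinery described above, using an SE kernel of the form (2) with hypothetical hyperparameters and synthetic one-step targets (none of these values come from the paper):

```python
import numpy as np

def k_se(X1, X2, lam=1.0, lengthscales=np.array([1.0, 1.0])):
    # SE kernel: lam^2 * exp(-||x_j - x_k||^2 scaled by the diagonal lengthscales
    d = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    return lam**2 * np.exp(-np.sum(d**2, axis=-1))

def gp_posterior(X, y, X_star, sigma_n=1e-2):
    # standard zero-mean GP posterior mean and variance at the query points X_star
    K = k_se(X, X) + sigma_n**2 * np.eye(len(X))
    Ks = k_se(X_star, X)
    mean = Ks @ np.linalg.solve(K, y)
    cov = k_se(X_star, X_star) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# learn the one-step change of a toy 1-D system from inputs (x_t, u_t)
X = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, -0.2], [1.5, 0.3]])
y = 0.1 * np.sin(X[:, 0]) + 0.05 * X[:, 1]   # synthetic GP targets
mean, var = gp_posterior(X, y, np.array([[0.75, 0.0]]))
```

The posterior mean gives the predicted state change, and the variance quantifies the model's uncertainty, which the particle-based rollouts later exploit.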
III. VELOCITY-FREE MC-PILCO

Here we present the algorithm VF-MC-PILCO (Velocity-Free Monte Carlo Probabilistic Inference for Learning COntrol), whose objective is to solve the problem defined in Sec. II-A without the need of performing any kind of velocity estimation. In fact, tuning effective estimators may be a tedious and complex operation, especially in the presence of high measurement noise. This might significantly compromise the MBRL algorithm, if not duly considered. VF-MC-PILCO circumvents these issues by adopting a VF formulation. Inspired by [12], [13], we consider a VF model of the system dynamics, given in the following general form:

$$q_{t+1} = f_{df}(q_t, q_{t-1}, \dots, q_{t-m_q}, u_t, \dots, u_{t-m_u}). \quad (3)$$

The joint positions at the next time step are predicted based on the history of the past positions, from $t$ up to $t - m_q$, and the history of applied control actions, from $t$ up to $t - m_u$. Let $m_q$ and $m_u$ be called, respectively, the position memory and the control memory of the VF model. In this new VF framework, it is convenient to redefine the state of the system as $x_t = [q_t^\top, \dots, q_{t-m_q}^\top, u_{t-1}^\top, \dots, u_{t-m_u}^\top]^\top$. In the following, we present the model learning and policy update phases of the VF-MC-PILCO algorithm, detailing how they have been adapted to the new VF formulation.

A. VF Model Learning

We employ the GPR framework of Sec. II-B, but instead of considering a full state representation with velocities, we train a VF GP model of form (3). Let us denote with $q^{(i)}_t$ the position of the $i$-th joint at time $t$, and define $\Delta^{(i)}_{q_t} = q^{(i)}_{t+1} - q^{(i)}_t$, for $i \in \{1, \dots, d_q\}$. The evolution of $\Delta^{(i)}_{q_t}$ for all $i$ is modeled using a distinct GP, whose input depends upon $[q_t^\top, \dots, q_{t-m_q}^\top, u_t^\top, u_{t-1}^\top, \dots, u_{t-m_u}^\top]^\top$. Trivially, the transition functions of $q_{t-1}, \dots, q_{t-m_q}, u_{t-1}, \dots, u_{t-m_u}$ are deterministic and known.
Experimentally, we found it beneficial in terms of data efficiency (details in Sec. IV-C) to rearrange the GP input as

$$\tilde{x}_t = \left[ q_t^\top, \Delta^\top_{q_{t-1}}, \dots, \Delta^\top_{q_{t-m_q}}, u_t^\top, \dots, u_{t-m_u}^\top \right]^\top, \quad (4)$$

where $\Delta_{q_{t-i}} = q_{t-i+1} - q_{t-i}$, for $i = 1, \dots, m_q$, following the same notation used before when defining the GP targets.
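The construction of the rearranged GP input in (4) from a history of measured positions and applied controls can be sketched as follows; the history lengths and numerical values are illustrative only:

```python
import numpy as np

def vf_gp_input(q_hist, u_hist, m_q, m_u):
    """q_hist[-1] is q_t, q_hist[-2] is q_{t-1}, ...; same convention for u_hist,
    whose last element is u_t. Returns the VF GP input of (4)."""
    q_t = q_hist[-1]
    # consecutive position differences Delta_{q_{t-i}} = q_{t-i+1} - q_{t-i}
    deltas = [q_hist[-i] - q_hist[-i - 1] for i in range(1, m_q + 1)]
    controls = [u_hist[-i] for i in range(1, m_u + 2)]  # u_t, ..., u_{t-m_u}
    return np.concatenate([q_t, *deltas, *controls])

q_hist = [np.array([0.0]), np.array([0.1]), np.array([0.3])]  # q_{t-2}, q_{t-1}, q_t
u_hist = [np.array([1.0]), np.array([0.5])]                   # u_{t-1}, u_t
x_tilde = vf_gp_input(q_hist, u_hist, m_q=2, m_u=1)
# x_tilde stacks [q_t, q_t - q_{t-1}, q_{t-1} - q_{t-2}, u_t, u_{t-1}]
```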
In this way, we are providing the model with additional information about the rates of change observed inside the considered past position memory interval $m_q$. Depending on the considered application, it may be convenient to further modify the GP input vector w.r.t. (4), in order to exploit particular characteristics of the considered quantities. For instance, we applied a sin-cos expansion to angular quantities during some of the experiments presented in the next sections.

B. VF Particle-Based Policy Gradient

The GP-based VF predictive model of Sec. III-A is now employed to optimize the policy parameters $\theta$ following a particle-based policy gradient strategy.
VF-MC-PILCO computes $\hat{J}(\theta)$, an approximation of $J(\theta)$ in (1), exploiting the posterior distribution $p(x_{t+1} | \tilde{x}_t, \mathcal{D})$ defined by the GPs. Finally, it updates $\theta$ with a gradient-based procedure. The computation of $\hat{J}(\theta)$ entails the simulation of the effects of the policy $\pi_\theta$ on $M$ independent state particles by cascading the one-step-ahead stochastic predictions. In particular, let $q^{(m)}_t$, for $m = 1, \dots, M$, represent the positions of the $M$ state particles simulated by the VF GP model. Starting positions are sampled from a given distribution $p(q_0)$. We assume that the system is not moving at $t = 0$, i.e., $q^{(m)}_0 = q^{(m)}_{-1} = \dots = q^{(m)}_{-m_q}$. At each time step $t$, in order to simulate the presence of measurement error, we corrupt the particle positions $q^{(m)}_t$ with a fictitious noise $e^{(m)}_t$, e.g., a zero-mean Gaussian i.i.d. noise, obtaining a set of simulated measurements $\bar{q}^{(m)}_t = q^{(m)}_t + e^{(m)}_t$. Then, for each particle, the policy $\pi_\theta$ selects the next control action $u^{(m)}_t$ according to the history of the simulated measurements, $\bar{x}^{(m)}_t = [\bar{q}^{(m)\top}_t, \dots, \bar{q}^{(m)\top}_{t-m_q}]^\top$.
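One step of the particle simulation described above can be sketched as follows; the clipped linear policy and the Gaussian one-step model are hypothetical stand-ins for the learned policy and GP posterior:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d_q = 100, 1                          # number of particles, position dimension

q = np.zeros((M, d_q))                   # particle positions q_t^{(m)}
q_bar = q + rng.normal(0.0, 1e-3, q.shape)   # fictitious measurement noise e_t^{(m)}
u = np.clip(-2.0 * q_bar, -10.0, 10.0)   # placeholder policy acting on noisy positions
mean_next = q + 0.01 * u                 # stand-in for the GP posterior mean
std_next = 1e-2                          # stand-in for the GP posterior std
q_next = mean_next + std_next * rng.normal(size=q.shape)   # forward sampling
```

Iterating this step for $T$ time steps yields the bundle of particle trajectories used to estimate the expected cumulative cost.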
Finally, the $M$ positions at the next time step, $t+1$, are simulated by forward sampling from the distributions derived by the VF GP model, $p(x^{(m)}_{t+1} | \tilde{x}^{(m)}_t, \mathcal{D})$ (for $m = 1, \dots, M$), with $\tilde{x}^{(m)}_t$ defined for each particle $m$ as in (4). This procedure is iterated for $T$ time steps, obtaining $M$ different particle trajectories $\{\{x^{(m)}_t\}_{m=1}^{M}\}_{t=0}^{T}$ that simulate the results of the policy. The particle generation procedure is depicted in the block scheme of Fig. 1. The sample mean of the costs incurred by the different particles provides an estimate of the expected cumulative cost, namely

$$\hat{J}(\theta) = \sum_{t=0}^{T} \left( \frac{1}{M} \sum_{m=1}^{M} c\left( x^{(m)}_t \right) \right). \quad (5)$$

The computational graph resulting from (5) allows us to compute $\nabla_\theta \hat{J}(\theta)$, i.e., the gradient of $\hat{J}(\theta)$ w.r.t. $\theta$, through backpropagation, exploiting the reparametrization trick [16], [17] to propagate the gradient through the stochastic operations. Finally, a stochastic gradient descent algorithm, e.g., Adam [18], can exploit the estimated gradient to update $\theta$.

Fig. 1: VF-MC-PILCO particle generation block scheme.
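The reparametrization trick mentioned above can be sketched in PyTorch: the next position is written as mean + std · ε with ε drawn independently of θ, so gradients flow through the mean and standard deviation. The linear policy and the Gaussian one-step model are hypothetical stand-ins for the GP posterior; only the reparametrized sampling and the Adam update mirror the procedure in the text:

```python
import torch

torch.manual_seed(0)
theta = torch.tensor([0.5], requires_grad=True)   # a scalar policy gain

q = torch.zeros(64, 1)                            # 64 particle positions
J = torch.zeros(())
for _ in range(10):
    u = -theta * q                                # placeholder linear policy
    mean, std = q + 0.1 * u, 0.01                 # stand-in one-step model
    eps = torch.randn_like(q)                     # noise independent of theta
    q = mean + std * eps                          # reparametrized forward sample
    J = J + (q ** 2).mean()                       # Monte Carlo cost estimate

J.backward()                                      # gradient of J w.r.t. theta
opt = torch.optim.Adam([theta], lr=1e-2)
opt.step()                                        # one Adam update of theta
```

Without the reparametrization, the sampling node would block backpropagation and the policy gradient could not be computed this way.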
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Policy structure We considered an RBF network policy with outputs limited by a hyperbolic tangent function, properly scaled.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' We call this function squashed-RBF-network, and it is expressed as πθpx˚q “ umax tanh ˜ 1 umax nb ÿ i“1 wie||ai´x˚||2 Σπ ¸ , (6) The input vector of the policy is defined as x˚ t “ ” qT t , ∆T qt´1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' , ∆T qt´mq ıT , (7) where we are providing the policy with the same consecutive differences of position measures used for GP input in (4).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' The policy parameters are θ “ tw, A, Σπu, where w “ rw1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' wnbs and A “ ta1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' .' 
Here, w_i and a_i are, respectively, the weights and the centers of the n_b basis functions, while Σ_π is a diagonal matrix that determines their shapes. The maximum control u_max is constant and depends on the application. It is worth mentioning that VF-MC-PILCO is not restricted to this particular choice of policy function.

IV. SIMULATED EXPERIMENT: CART-POLE SWING-UP

As a preliminary validation, we tested VF-MC-PILCO on a simulated cart-pole swing-up task to analyze its performance under different setups. We compare the proposed approach with a state-of-the-art MBRL algorithm specifically designed to deal with partial state measurability of real mechanical systems, MC-PILCO4PMS [11]. MC-PILCO4PMS follows a particle-based policy gradient framework similar to the one depicted in Sec. III-B but, differently from the proposed VF-MC-PILCO, it works with velocity estimates by simulating not only the evolution of the system state but also the evolution of the estimated state, which entails modeling the measurement system and the implemented online filters. Notice that the implementation of MC-PILCO4PMS can in some cases be complex or time-consuming, due to the requirement to reproduce the online filtering procedure inside the policy optimization phase and the need to adopt a different offline filter for model learning. This is the limitation that the proposed method aims to solve. Both algorithms have been implemented in Python¹, exploiting the PyTorch library [19].

¹ https://www.merl.com/research/license/MC-PILCO

Now, let us briefly describe the characteristics of the simulated scenario. Let p_t and α_t be, respectively, the position of the cart and the angle of the pole at time step t; hence q_t = [p_t, α_t]^T. The target configurations corresponding to the pendulum swing-up are given by p_des = 0 [m] and |α_des| = π [rad]. The cart-pole starts from α_0 = 0 [rad] and p_0 = 0 [m]. The control action is the force applied to the cart, and the system is controlled at 30 [Hz]. We considered Gaussian measurement noise with standard deviation 10^-3 [m] for positions and 2·10^-3 [rad] for angles. The GPs of the VF model are equipped with the SE kernel described in (2). The adopted policy is a squashed-RBF-network policy with n_b = 200 basis functions and u_max = 10 [N].
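As an illustration, a minimal sketch of one common form of squashed RBF-network policy is given below: a weighted sum of Gaussian basis functions squashed into [-u_max, u_max] through a tanh saturation. The exact squashing and the function names are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def squashed_rbf_policy(x, centers, weights, inv_lengthscales_sq, u_max):
    """Sketch of a squashed-RBF-network policy: Gaussian basis functions
    with diagonal shape matrix Sigma_pi, squashed so |u| <= u_max."""
    diff = centers - x                              # (n_b, d)
    # squared Mahalanobis distance under the diagonal Sigma_pi
    d2 = np.sum(diff ** 2 * inv_lengthscales_sq, axis=1)
    raw = weights @ np.exp(-d2)                     # unbounded RBF output
    return u_max * np.tanh(raw / u_max)             # bounded by construction
```

Whatever the weights, the output respects the actuation limit, which is why the maximum control u_max can simply be set per application.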
The number of particles is set to M = 400 during policy optimization. In order to avoid singularities due to the angles, we replaced, in both the model input x̃_t defined in (4) and the policy input x*_t defined in (7), occurrences of α_t with sin(α_t) and cos(α_t). Exploration data were collected by applying random actions, obtained by filtering Gaussian white noise with cut-off frequency 1.5 [Hz]. The cost function is

c(x_t) = 1 − exp( −((|α_t| − π)/l_α)^2 − (p_t/l_p)^2 ),   (8)

where l_α = 3 and l_p = 1 define the shape of c(·). The absolute value on α_t is needed to allow swing-up solutions towards both of the equivalent target pole angles π and −π. The objective is to analyze different VF-MC-PILCO configurations and compare their performance with the results obtained by MC-PILCO4PMS, as a benchmark.
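The saturated exponential cost of Eq. (8) can be written compactly; the sketch below follows the equation as stated, with l_p = 1 and l_α = 3 as defaults.

```python
import numpy as np

def cartpole_cost(p, alpha, l_p=1.0, l_alpha=3.0):
    """Cost of Eq. (8): zero at the target (p = 0, |alpha| = pi),
    saturating towards 1 far from it."""
    return 1.0 - np.exp(-((np.abs(alpha) - np.pi) / l_alpha) ** 2
                        - (p / l_p) ** 2)
```

Because of the absolute value on alpha, the two equivalent upward configurations +π and −π yield the same (zero) cost.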
We analyzed the results obtained in 50 distinct experiments, each consisting of 5 trials of length 3 seconds, varying the random seed each time. In this way, it is possible to evaluate the robustness of the algorithm to different exploration trajectories and policy initializations, as well as to different noise realizations. In particular, we investigate the effects that different position and control memories, m_q and m_u, have on modeling and policy learning. We studied four different VF-MC-PILCO configurations, choosing m_q between 1 and 2, and m_u between 0 and 1. In the following, we refer to these alternatives with the symbol VF^{m_u}_{m_q}.

A. Modeling results

We compared the accuracy of the different VF GP models by looking at the absolute values of the prediction errors observed on the data recorded at the last trial in all 50 experiments. Models were trained using all the data collected up to that trial. Fig. 2 reports the results by means of box plots, showing median values, confidence intervals, and outliers. The results show that the presence of the input history u_{t−1} as part of the GP inputs is beneficial and, within this choice (see the models with m_u = 1), the best results are obtained by VF^{m_u=1}_{m_q=2}. On the other hand, the larger errors and the significant number of outliers obtained by the models with m_u = 0 seem to indicate that these setups are not fully capable of fitting the recorded position changes. Also, it appears that a longer position memory improves prediction accuracy only when the control memory is m_u = 1.

Fig. 2: Absolute p and α prediction errors obtained by different VF GP models at trial 5 in the simulated cart-pole experiments.

We can conclude that it seems beneficial to provide the VF GP models with information about past control actions (m_u = 1) when fitting the system dynamics without relying on velocity.
B. Policy learning results

In this section, we evaluate the performance of the control policies learned by the different VF-MC-PILCO setups and by MC-PILCO4PMS. Notice that MC-PILCO4PMS achieved results comparable to or better than other state-of-the-art GP-based MBRL algorithms, see [11]. The cumulative costs and success rates obtained at each trial in the 50 experiments are reported in Fig. 3. In the two plots, the cumulative cost is reported in terms of median values and confidence intervals defined by the 5th and 95th percentiles. As one would expect, the worse modeling results of VF^{m_u=0}_{m_q=1} and VF^{m_u=0}_{m_q=2} lead to unsatisfactory policy learning: these VF-MC-PILCO setups manage to complete a successful swing-up in only approximately half of the cases. On the other hand, when using m_u = 1, VF-MC-PILCO is able to robustly find an optimal solution for the task by trial 5. In particular, the performance of VF^{m_u=1}_{m_q=2} is almost equivalent to that of MC-PILCO4PMS. This result confirms the effectiveness of the proposed method: with less information, as we are not manually tuning any velocity estimator, VF-MC-PILCO achieves state-of-the-art performance. For the user, this corresponds to less effort and a more general method, without significantly compromising performance.

C. Analysis of input vector structure

Before concluding this section, we would like to analyze the reasons behind the decision to use (4) and (7) as GP and policy input vectors, respectively. To this end, we compared the results obtained by VF-MC-PILCO with position memory m_q = 2 and control memory m_u = 1 using two different structures for the input vectors. The first employs directly the history of positions and actions up to time step t as GP input, i.e., x̃_t = [q_t^T, ..., q_{t−m_q}^T, u_t^T, ..., u_{t−m_u}^T]^T, and the history of positions as policy input, i.e., x*_t = [q_t^T, ..., q_{t−m_q}^T]^T.
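To make the two alternatives concrete, the sketch below builds both input vectors from the same position/control history. The exact ordering of the components and the helper names are assumptions for illustration; only the distinction (raw histories versus current position plus consecutive differences) comes from the text.

```python
import numpy as np

def naive_gp_input(q_hist, u_hist):
    """Naive input: raw position and control histories, concatenated.
    q_hist = [q_t, q_{t-1}, ..., q_{t-m_q}], u_hist = [u_t, ..., u_{t-m_u}]."""
    return np.concatenate(list(q_hist) + list(u_hist))

def diff_gp_input(q_hist, u_hist):
    """Difference-based input: the current position plus consecutive
    position differences, carrying velocity-like information
    without any filtering."""
    q = [np.asarray(qi) for qi in q_hist]
    deltas = [q[i] - q[i + 1] for i in range(len(q) - 1)]   # Delta q terms
    return np.concatenate([q[0]] + deltas + list(u_hist))
```

Both vectors contain the same raw information; the difference-based form simply exposes the rate of change of the positions explicitly, which is what the experiments below show to matter for data efficiency.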
The second version is the one employed previously, with GP input and policy input defined as in (4) and (7), respectively. To distinguish the two implementations, we label the first VF^{m_u=1}_{m_q=2} naive, and the second VF^{m_u=1}_{m_q=2} with position differences.

Fig. 3: Cumulative costs registered during the simulated cart-pole experiments for the four considered VF-MC-PILCO setups and the MC-PILCO4PMS benchmark (indicated by the shorthand PMS). The observed success rates are presented in the table below.

                      Trial 1  Trial 2  Trial 3  Trial 4  Trial 5
VF^{m_u=0}_{m_q=1}       0%      14%      34%      46%      56%
VF^{m_u=0}_{m_q=2}       0%      10%      18%      28%      52%
VF^{m_u=1}_{m_q=1}       0%       8%      52%      86%      96%
VF^{m_u=1}_{m_q=2}       0%      20%      73%      93%     100%
PMS                      0%      14%      82%      98%      96%

We analyzed the results obtained by these two setups in 50 distinct experiments, each consisting of 7 trials of length 3 seconds, varying the random seed each time. The obtained cumulative costs are reported in Fig. 4 in terms of median values and 5-95 percentile ranges. It is clear that providing information about the rate of change of the position measurements, by using (4) and (7) as input vectors, greatly improves the data efficiency of the VF-MC-PILCO algorithm. In fact, the VF^{m_u=1}_{m_q=2} naive implementation (which uses directly the history of positions and controls) shows a much slower convergence, reaching a 79% success rate only at trial 7. On the other hand, VF^{m_u=1}_{m_q=2} with position differences is always able to find a solution by trial 5. This result underlines the importance of the information carried by the differences between consecutive measured positions. Without this information, the model needs more data to correctly capture the dynamics of the system relying only on positions.
Through the input vectors (4) and (7), we are able to provide the model with a form of velocity information, without requiring any kind of filtering procedure.

Fig. 4: Cumulative costs registered by VF-MC-PILCO with different GP and policy input structures that include past positions either as consecutive differences (∆q_{t−1}, ..., ∆q_{t−m_q}) or directly (q_{t−1}, ..., q_{t−m_q}).

V. SIMULATED EXPERIMENT: UR5 ROBOT CONTROL

The objective of this experiment is to test VF-MC-PILCO on a more complex system with a higher number of DoF. We used VF-MC-PILCO to learn a joint-space torque controller for a UR5, a robotic manipulator with 6 DoF, simulated in MuJoCo [20], assuming that only joint angles, and not velocities, are measured. Measurements are perturbed by white Gaussian noise with a standard deviation of 10^-3. Let us denote by q_t ∈ R^6 the joint angles and by u_t ∈ R^6 the applied torques. Our objective is to learn a VF control policy able to follow a desired trajectory {q^r_t}, t = 1, ..., T. Let e_t = q^r_t − q_t denote the position error at time t. The VF-MC-PILCO memories were set to m_q = 2 and m_u = 1; hence the VF state of the system at time step t is defined as x_t = [q_t^T, q_{t−1}^T, q_{t−2}^T, u_{t−1}^T]^T. The GP input vector was defined by applying a sin-cos expansion of the angular quantities, as x̃_t = [sin(q_t)^T, cos(q_t)^T, ∆q_{t−1}^T, ∆q_{t−2}^T, u_t^T, u_{t−1}^T]^T. The policy adopted was a multi-output squashed-RBF-network with n_b = 400 basis functions and u_max = 1 [N·m] for all the joints. M = 200 particles were used during optimization. The policy takes as input the vector x*_t = [sin(q_t)^T, cos(q_t)^T, ∆q_{t−1}^T, ∆q_{t−2}^T, e_t^T]^T. Fig. 5 represents the overall control scheme.

Fig. 5: VF-MC-PILCO control scheme for the simulated UR5.

In this experiment, we considered a control horizon of 4 seconds with a sampling time of 0.02 seconds.
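The sin-cos expansion used in both the cart-pole and UR5 inputs can be sketched as follows: each angle is replaced by its (sin, cos) pair so that inputs vary continuously across the ±π wrap-around instead of jumping.

```python
import numpy as np

def sincos_expand(q):
    """Replace each angle with its (sin, cos) pair, making the input
    representation continuous at the +/- pi wrap-around and avoiding
    the singularities mentioned in the text."""
    q = np.asarray(q)
    return np.concatenate([np.sin(q), np.cos(q)])
```

In particular, the two representations of the upward pendulum configuration, +π and −π, map to (numerically) the same feature vector.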
The reference trajectory has been calculated to make the end-effector draw a circle in the X-Y operational space. The initial exploration used to initialize the VF GP model is provided by a poorly tuned PD controller (for which we estimated velocities by backward differentiation). We used M = 200 particles for gradient estimation and considered the following cost,

c(x_t) = 1 − exp( −||q^r_t − q_t||^2 ).

Fig. 6: Average end-effector position tracking errors obtained during the exploratory phases and by the control policy learned at trial 3.

Fig. 7: Example of explorative and final end-effector trajectories.

The experiment was repeated 10 times, varying the random seed and the initial exploration trajectories, obtained each time by using random PD gains uniformly sampled as K_P ∼ U(0.5, 2) and K_D ∼ U(0.01, 0.2). VF-MC-PILCO managed to learn an effective control policy by the third trial in all the repetitions, with average positioning errors no greater than 2 [mm]. The average end-effector tracking errors are reported in Fig. 6, where results are given by means of box plots. Fig. 7 shows an example of exploratory and final trajectories, taken from one of the conducted tests.
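The randomized exploration controller described above can be sketched as below: gains are drawn as K_P ∼ U(0.5, 2) and K_D ∼ U(0.01, 0.2), and velocities are replaced by backward differences of the measured positions. The controller structure, clipping, and names are illustrative assumptions.

```python
import numpy as np

def make_random_pd_controller(n_joints, rng, u_max=1.0):
    """Sketch of a 'poorly tuned' PD exploration controller with gains
    sampled as in the paper; velocity feedback comes from backward
    differentiation of measured positions, not from a velocity sensor."""
    kp = rng.uniform(0.5, 2.0, size=n_joints)
    kd = rng.uniform(0.01, 0.2, size=n_joints)

    def controller(e_t, e_prev, dt):
        de = (e_t - e_prev) / dt                 # backward-difference "velocity"
        u = kp * e_t + kd * de
        return np.clip(u, -u_max, u_max)         # respect the torque limit
    return controller
```

Varying the gains across repetitions yields different exploration trajectories from the same controller family, which is what the 10 repeated experiments rely on.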
VI. EXPERIMENTS ON REAL MECHANICAL SYSTEMS

In this section, we report the results obtained by VF-MC-PILCO when applied to real systems. In particular, we experimented on two benchmark systems: a Furuta pendulum and a ball-and-plate (Fig. 8)². The objective is to compare the performance obtained by VF-MC-PILCO on these two setups with the results of MC-PILCO4PMS reported in [11].

² A video of the experiments on real mechanical systems is available at https://youtu.be/Hx3Y1Ib-6Tc.

Fig. 8: (Left) Furuta pendulum. (Right) Ball-and-plate system.

A. Furuta pendulum

The Furuta pendulum [21] is a popular nonlinear control benchmark composed of two revolute joints and three links (see Fig. 8, left). It is an under-actuated system, as only the horizontal joint is actuated by a DC servomotor. The two angles are measured by optical encoders with 4096 [ppr]. The control action is the motor voltage, and its maximum allowed value is 10 [V]. Let the pose at time step t be q_t = [α^h_t, α^v_t]^T, where α^h_t is the angle of the horizontal joint and α^v_t the angle of the vertical joint attached to the pendulum. The objective is to learn how to swing up the pendulum and stabilize it in the upward equilibrium (α^v_t = ±π [rad]) with α^h_t = 0 [rad], starting from q_0 = [0, 0]^T. The trial length is 3 [s] and the system is controlled at 30 [Hz].
The cost is defined as

c(x_t) = 1 − exp( −(α^h_t / 2)^2 − ((|α^v_t| − π) / 2)^2 ) + c_b(α^h_t),   (9)

with

c_b(α^h_t) = 1 / (1 + exp(−10(−3π/4 − α^h_t))) + 1 / (1 + exp(−10(α^h_t − 3π/4))).

The first part of (9) aims at driving the two angles towards the target, while c_b(α^h_t) penalizes solutions that push the horizontal joint beyond a certain safety threshold. In this scenario, we used position memory m_q = 2 and control memory m_u = 1. We equipped the VF GP model with an SE kernel and adopted a squashed-RBF-network with n_b = 200 basis functions as the control policy. M = 400 particles were simulated during policy optimization. We replaced, in both the GP input x̃_t and the policy input x*_t, occurrences of α^h_t and α^v_t with their sin-cos expansions, as previously done in the simulated cart-pole case. The exploration trajectory has been obtained using as input a sum of ten cosine waves with random frequencies and identical amplitudes.
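Eq. (9) combines a saturated goal term with two logistic barriers on the horizontal joint; a direct transcription is sketched below.

```python
import numpy as np

def furuta_cost(alpha_h, alpha_v):
    """Cost of Eq. (9): a saturated term driving both angles to the target,
    plus two logistic barriers penalising |alpha_h| beyond ~3*pi/4."""
    goal = 1.0 - np.exp(-(alpha_h / 2.0) ** 2
                        - ((np.abs(alpha_v) - np.pi) / 2.0) ** 2)
    barrier = (1.0 / (1.0 + np.exp(-10.0 * (-3.0 * np.pi / 4.0 - alpha_h)))
               + 1.0 / (1.0 + np.exp(-10.0 * (alpha_h - 3.0 * np.pi / 4.0))))
    return goal + barrier
```

Near the target (α^h = 0, |α^v| = π) both barrier sigmoids are essentially zero, so the cost reduces to the goal term; past the safety threshold the corresponding sigmoid switches on sharply.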
The presence of quantization errors was simulated during particle generation by corrupting the predicted angles with a uniform fictitious measurement noise $U(-\frac{\pi}{4096}, \frac{\pi}{4096})$ [rad].

Fig. 9: Real swing-up trajectory (bullets) and particle predictions (shaded lines) obtained by VF-MC-PILCO at trial 6 of the Furuta pendulum experiment. (Panels: $\alpha^v$ [rad] and $\alpha^h$ [rad] versus $t$ [s].)

VF-MC-PILCO learned how to swing up the Furuta pendulum at trial 6, i.e., after 18 seconds of experience. This is the same result obtained by MC-PILCO4PMS when using the SE kernel. Hence, the VF approach showed no particular difference in data efficiency compared with an approach that relies on velocity estimates. In Fig. 9, we report the successful swing-up performed by VF-MC-PILCO at trial 6, together with the particles predicted by the VF GP model when simulating the effects of the same control policy.
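The fictitious quantization noise described above amounts to adding a uniform perturbation of half an encoder step to each predicted angle during particle rollouts. A minimal sketch (our own illustration, with hypothetical names):

```python
import math
import random

def corrupt_with_quantization_noise(angles, ppr=4096, rng=random):
    """Simulate encoder quantization during particle rollouts by adding
    fictitious uniform measurement noise U(-pi/ppr, pi/ppr) [rad]."""
    half_step = math.pi / ppr
    return [a + rng.uniform(-half_step, half_step) for a in angles]

rng = random.Random(0)
noisy = corrupt_with_quantization_noise([0.0, math.pi / 2], rng=rng)
```

Injecting this noise into the simulated particles keeps the policy optimization aware of the measurement resolution it will face on the real system.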
Notice how the particles' trajectories resemble almost perfectly the real behaviour of the two angles.

B. Ball-and-plate

The ball-and-plate system is composed of a square plate that can be tilted in two orthogonal directions, and a ball that is free to roll over it (see Fig. 8, right). A camera placed on top of the system tracks the ball and measures its position on the plate with a precision of one millimeter. Let, at time $t$, $(b^x_t, b^y_t)$ be the position of the center of the ball, and let $\alpha^{(1)}_t$ and $\alpha^{(2)}_t$ be the angles of the two motors tilting the plate. Thus, $q_t = [b^x_t, b^y_t, \alpha^{(1)}_t, \alpha^{(2)}_t]^T$. The drivers of the motors allow only position control and do not provide feedback about the motors' angles.
To keep track of them, we defined the control actions as the difference between two consecutive reference values sent to the motors, and we limited the maximum input to a sufficiently small value, i.e., 4 [deg], such that the motor controllers are able to reach the target within the sampling time. Then, as a first approximation, the reference angles and the actual motor angles coincide, and we have $u^{(1)}_t = \alpha^{(1)}_{t+1} - \alpha^{(1)}_t$ and $u^{(2)}_t = \alpha^{(2)}_{t+1} - \alpha^{(2)}_t$. The objective of the experiment is to learn how to control the motor angles in order to stabilize the ball around the center of the plate. The trial length is 3 seconds, with a sampling frequency of 30 [Hz]. The cost function encoding the task is
$$c(x_t) = 1 - \exp(-g(x_t)), \quad \text{with} \quad g(x_t) = \left(\frac{b^x_t}{0.15}\right)^2 + \left(\frac{b^y_t}{0.15}\right)^2 + \left(\alpha^{(1)}_t\right)^2 + \left(\alpha^{(2)}_t\right)^2.$$

Fig. 10: Example of GP targets in the ball-and-plate experiment, i.e., measured ball position changes $\Delta b^x_t$ and $\Delta b^y_t$ [m] in the X and Y directions versus $t$ [s].

Regarding the VF model setup, we considered position memory $m_q = 2$ and control memory $m_u = 1$, and we replaced, in both the GP inputs $\tilde{x}_t$ and the policy input $x^*$, the occurrences of $\alpha^{(1)}_t$ and $\alpha^{(2)}_t$ with their sin-cos expansion. Analogously to the previous MC-PILCO4PMS experiment, the kernel function of the VF GP model is given by the sum of an SE kernel that takes as input the whole GP input vector and a linear kernel that takes as input only the sin-cos expansion of the angular quantities. The control policy is a squashed-RBF-network with $n_b = 400$ basis functions. Policy optimization involves $M = 400$ particles. The initial exploration is implemented in two trials, in which the control signals are two distinct noisy triangular waves.
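Because the motor drivers accept only position references, each action is the increment between two consecutive references, clipped to the stated $\pm 4$ [deg] bound. The following sketch illustrates that mechanism (names and structure are our own, not the authors' implementation):

```python
import math

MAX_STEP_RAD = math.radians(4.0)  # maximum |u| = 4 [deg], per the text

def next_reference(alpha_ref: float, u: float) -> float:
    """Apply action u = alpha_{t+1} - alpha_t as a clipped increment
    to the current position reference sent to the motor driver."""
    u = max(-MAX_STEP_RAD, min(MAX_STEP_RAD, u))
    return alpha_ref + u

# A request larger than the bound is clipped to 4 deg.
r = next_reference(0.0, math.radians(10.0))
```

Keeping the increment small is what justifies the approximation that reference and actual motor angles coincide within one sampling period.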
Especially during exploration and the initial trials, the ball might touch the borders of the plate. In those cases, we kept the data up to the collision instant and discarded the rest. The presence of quantization errors was simulated during particle generation by corrupting the predicted positions with a uniform fictitious measurement noise $U(-\frac{0.001}{2}, \frac{0.001}{2})$ [m]. A peculiarity of this experiment, in comparison with the previous ones, is the wide range of initial conditions: the policy must learn how to drive the ball to the center starting from any position on the plate's surface. Hence, the initial distribution considered for $b^x_0$ and $b^y_0$ is the uniform $U(-0.15, 0.15)$ [m]. The measurements provided by the camera are affected by a significant quantization error. For instance, Fig. 10 reports the measured differences between consecutive ball positions during a trial; these quantities are the targets of the GPs in the VF model. In such a context, ball velocity estimation can be very challenging. In fact, when applying MC-PILCO4PMS to the same system, methods like finite differences and low-pass filtering were not sufficient, and it was necessary to implement a Kalman filter (online) and a Kalman smoother (offline), whose tuning was a delicate and time-consuming procedure of critical importance for the success of the algorithm. On the contrary, VF-MC-PILCO managed to solve the task by working directly with the raw position measurements, without applying any kind of filtering.
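To make the wide initial-condition distribution concrete, the sketch below samples initial particle states uniformly over the plate and applies the fictitious camera-resolution noise to a predicted position. This is our own illustration under the stated assumptions (0.15 m half-plate range, 1 mm camera precision); it is not the authors' code.

```python
import random

PLATE_HALF = 0.15   # initial ball position range [m]: U(-0.15, 0.15)
CAM_RES = 0.001     # camera precision [m] -> noise U(-0.0005, 0.0005)

def sample_initial_particles(m: int, rng: random.Random):
    """Draw M initial particle states (bx, by, alpha1, alpha2):
    ball position uniform over the plate, motor angles at zero."""
    return [(rng.uniform(-PLATE_HALF, PLATE_HALF),
             rng.uniform(-PLATE_HALF, PLATE_HALF), 0.0, 0.0)
            for _ in range(m)]

def corrupt_position(b: float, rng: random.Random) -> float:
    """Fictitious camera quantization noise on a predicted position [m]."""
    return b + rng.uniform(-CAM_RES / 2, CAM_RES / 2)

particles = sample_initial_particles(400, random.Random(0))
```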
Besides that, VF-MC-PILCO proved to be surprisingly data-efficient, being able to solve the task at the second trial, after only 7.97 seconds of experience, whereas MC-PILCO4PMS solved the task after 11.33 seconds.

Fig. 11: Ten different ball trajectories obtained by the VF-MC-PILCO policy, plotted in the $(b^x, b^y)$ plane [m]. Steady-state positions are marked with black crosses. The dashed circle has the same diameter as the ball.

We tested the policy starting from ten different points in order to compare the two policies obtained by VF-MC-PILCO (Fig. 11) and MC-PILCO4PMS. The mean steady-state error, i.e., the average distance of the last ball position from the center observed in the ten tests, was 0.0134 [m], while the final MC-PILCO4PMS policy obtained a slightly better result, with a mean error of 0.0099 [m]. This might be due to the difference between the two policy inputs: MC-PILCO4PMS relies on a Kalman filter, while VF-MC-PILCO works directly with raw measurements (in the presence of significant noise). Nevertheless, this performance difference is quite negligible, given the dimension of the ball, whose radius is 0.016 [m].

VII. CONCLUSIONS

We presented VF-MC-PILCO, a novel MBRL algorithm specifically designed to learn from scratch how to control mechanical systems, without the need of computing any explicit velocity estimates. In our opinion, this may be a critical advantage when dealing with real systems affected by significant measurement noise, since, in this kind of scenario, the design of accurate velocity estimators can be a tedious task.
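The mean steady-state error reported above is simply the average Euclidean distance of the final observed ball position from the plate center over the test rollouts. An illustrative computation (the positions below are made up, not the paper's data):

```python
import math

def mean_steady_state_error(final_positions):
    """Average distance of the last observed ball position (bx, by)
    from the plate center, over a set of test rollouts [m]."""
    return sum(math.hypot(bx, by) for bx, by in final_positions) \
        / len(final_positions)

# Hypothetical final positions from three test rollouts:
err = mean_steady_state_error([(0.01, 0.0), (0.0, -0.02), (0.015, 0.005)])
```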
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' The algorithm uses GPR to model the joint position changes, based on the history of past control actions and measurements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' VF-MC-PILCO was tested both in simulated environments (cart-pole and UR5 robot) as well as in two real mechanical systems (Furuta pendulum and ball-and-plate rig).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' It proved able to solve all the tasks, with results that are in line with the performance of our previous MBRL algorithm (MC-PILCO4PMS), which instead works with a complete state representation and must perform velocity estimation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' REFERENCES [1] Athanasios S Polydoros and Lazaros Nalpantidis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Survey of model- based reinforcement learning: Applications on robotics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Journal of Intelligent & Robotic Systems, 86(2):153–173, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [2] Richard S Sutton and Andrew G Barto.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Reinforcement learning: An introduction.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' MIT press, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [3] Christopher G Atkeson and Juan Carlos Santamaria.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' A comparison of direct and model-based reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' In Proceedings of international conference on robotics and automation, volume 4, pages 3557–3564.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' IEEE, 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [4] Christopher KI Williams and Carl Edward Rasmussen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Gaussian processes for machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' MIT press Cambridge, MA, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [5] Marc Deisenroth and Carl E Rasmussen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Pilco: A model-based and data-efficient approach to policy search.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' In Proceedings of the 28th International Conference on machine learning (ICML-11), pages 465– 472, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [6] Marc Peter Deisenroth, Carl Edward Rasmussen, and Dieter Fox.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Learn- ing to control a low-cost manipulator using data-efficient reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Robotics: Science and Systems VII, pages 57–64, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [7] Felix Berkenkamp, Matteo Turchetta, Angela Schoellig, and Andreas Krause.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Safe model-based reinforcement learning with stability guarantees.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' In Advances in neural information processing systems, pages 908–918, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [8] Konstantinos Chatzilygeroudis, Roberto Rama, Rituraj Kaushik, Dorian Goepp, Vassilis Vassiliades, and Jean-Baptiste Mouret.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Black-box data- efficient policy search for robotics.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 51–58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' IEEE, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [9] Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Deep reinforcement learning in a handful of trials using probabilistic dynamics models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' In Advances in Neural Information Processing Systems, pages 4754–4765, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [10] Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Model-ensemble trust-region policy optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' arXiv preprint arXiv:1802.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content='10592, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [11] Fabio Amadio, Alberto Dalla Libera, Riccardo Antonello, Daniel Nikovski, Ruggero Carli, and Diego Romeres.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Model-based policy search using monte carlo gradient estimation with real systems application.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' IEEE Transactions on Robotics, 38(6):3879–3898, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [12] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Romeres, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Zorzi, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Camoriano, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Traversaro, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Chiuso.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Derivative-free online learning of inverse dynamics models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' IEEE Transactions on Control Systems Technology, 28(3):816–830, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' [13] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/j9FPT4oBgHgl3EQf1jVQ/content/2301.13183v1.pdf'} +page_content=' Dalla Libera, D.' 
Misspecification-robust Sequential Neural Likelihood

Ryan P. Kelly⋆,†, David J. Nott‡,§, David T. Frazier¶, David J.
Warne⋆,†, and Christopher Drovandi⋆,†

⋆School of Mathematical Sciences, Queensland University of Technology, Australia
†Centre for Data Science, Queensland University of Technology, Australia
‡Department of Statistics and Data Science, National University of Singapore
§Institute of Operations Research and Analytics, National University of Singapore
¶Department of Econometrics and Business Statistics, Monash University, Australia

February 1, 2023

Abstract

Simulation-based inference (SBI) techniques are now an essential tool for the parameter estimation of mechanistic and simulatable models with intractable likelihoods. Statistical approaches to SBI, such as approximate Bayesian computation and Bayesian synthetic likelihood, have been well studied in both the well-specified and misspecified settings. However, most implementations are inefficient, in that many model simulations are wasted. Neural approaches such as sequential neural likelihood (SNL) have been developed that exploit all model simulations to build a surrogate of the likelihood function. However, SNL approaches have been shown to perform poorly under model misspecification. In this paper, we develop a new method for SNL that is robust to model misspecification and can identify areas where the model is deficient. We demonstrate the usefulness of the new approach on several illustrative examples.

Keywords: generative models, implicit models, likelihood-free inference, normalising flows, simulation-based inference

1 Introduction

Statistical inference for complex models can be challenging when the likelihood function is infeasible to evaluate many times. However, if the model is computationally inexpensive to simulate given parameter values, it is possible to perform approximate parameter estimation by so-called simulation-based inference (SBI) techniques (e.g. Cranmer et al. (2020)).
The difficulty of obtaining reliable inferences in the SBI setting is exacerbated when the model is misspecified (e.g. Frazier et al. (2020)).

arXiv:2301.13368v1 [stat.ME] 31 Jan 2023

Statistical approaches for SBI, such as approximate Bayesian computation (ABC, Sisson et al. (2018)) and Bayesian synthetic likelihood (BSL, Price et al. (2018)), have been well studied, both empirically (e.g. Drovandi and Frazier (2022)) and theoretically (e.g. Li and Fearnhead (2018), Frazier et al. (2018), Frazier et al. (2022)). These approaches often base inference on a summarisation of the data to manage computational costs. ABC aims to minimise the distance between observed and simulated summaries, whereas BSL constructs a Gaussian approximation of the model summary to form an approximate likelihood. In the case of model misspecification, there may be additional motivation to replace the entire dataset with summaries, as the resulting model can then be trained to capture the broad features of the data that may be of most interest; see, e.g., Lewis et al. (2021) for further discussion. In this paper, the type of misspecification we are interested in is when the model is not able to recover the observed summary statistic as the sample size diverges. This form of misspecification is referred to as incompatibility in Marin et al. (2014).

The behaviour of ABC and BSL under incompatibility is now well understood. Frazier et al. (2020) show that, under various assumptions, ABC is capable of concentrating onto the pseudo-true parameter value, which in the SBI context is the value that minimises some distance between the large-sample limits of the observed and simulated summaries. However, the concentration is not Gaussian and credible intervals do not have the correct frequentist coverage. BSL, on the other hand, can exhibit unexpected behaviour under misspecification (Frazier et al., 2021).
For example, it is possible to obtain Gaussian concentration onto the pseudo-true parameter, but it is also possible to obtain a multi-modal posterior that does not concentrate onto a singleton. Unfortunately, the behaviour for a given problem is not known a priori.

Given the undesirable properties of BSL under misspecification, Frazier and Drovandi (2021) propose methods to simultaneously identify which statistics are incompatible and make inferences robust. The approach of Frazier and Drovandi (2021) is a model expansion that introduces auxiliary variables, one per summary statistic, whose purpose is to either shift the means or inflate the variances in the Gaussian approximation so that the extended model is compatible, i.e. to soak up the misspecification.

Although ABC is, in a certain sense, robust to misspecification, and BSL has been extended to handle incompatibility, both remain inefficient in terms of the number of model simulations required. Most algorithms for ABC and BSL are wasteful in the sense that they use a relatively large number of model simulations that are associated with rejected parameter proposals (for some exceptions, see Jasra et al. (2019); Levi and Craiu (2022); Warne et al. (2018, 2022)). This has motivated the development of methods in machine learning that utilise all model simulations to learn either the likelihood (e.g. Papamakarios et al. (2019)), the posterior (e.g. Greenberg et al. (2019)) or the likelihood ratio (e.g. Thomas et al. (2022)). Since these objects are learned as functions of the parameter, subsequent posterior inference does not require further model simulation.

However, machine learning approaches such as sequential neural likelihood (SNL) and sequential neural posterior (SNP) have been shown to exhibit poor performance under model misspecification (e.g. Bon et al. (2022); Cannon et al. (2022); Schmitt et al. (2021); Ward et al. (2022)).
Thus there is a critical need to develop neural approaches that are robust to model misspecification. Ward et al. (2022) develop a method, which shares similarities with the mean adjustment approach developed for BSL, to make neural posterior estimation robust to model misspecification. Cannon et al. (2022) develop several robust neural SBI methods by incorporating machine learning techniques that are known to better handle out-of-distribution (OOD) data. Cranmer et al. (2020) advise incorporating additional noise directly into the simulator if model misspecification is suspected.

In this paper we develop a robust version of SNL, again inspired by the mean adjustment approach for BSL. Unlike Ward et al. (2022), who consider neural posterior estimation, we consider neural likelihood estimation, which is useful for problems where the likelihood is easier to emulate than the posterior. Further, ours is the first sequential neural approach that simultaneously detects and corrects for model misspecification.

2 Background

Let $y = (y_1, \dots, y_n)^\top$ denote the observed data and define $P_0^{(n)}$ as the true distribution of $y$. The observed data are assumed to be generated from a class of parametric models $\{P_\theta^{(n)} : \theta \in \Theta \subset \mathbb{R}^{d_\theta}\}$ for which the likelihood function is intractable, but from which we can easily simulate pseudo-data $x$ for any $\theta \in \Theta$, where $\theta$ is $d_\theta$-dimensional. Let $\Pi$ denote the prior measure for $\theta$ and $\pi(\theta)$ its density. The posterior density of interest is given by
$$\pi(\theta \mid y) \propto g(y \mid \theta)\,\pi(\theta),$$
where $g(y \mid \theta)$ is the likelihood function.

2.1 Statistical Approaches for SBI

Since we assume that the likelihood is computationally intractable, we conduct inference using approximate Bayesian methods. Statistical approaches to SBI aim to search for values of $\theta$ that produce pseudo-data $x$ which is "close enough" to $y$, and then retain these values to build an approximation to the posterior.
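To make this setup concrete, the following toy sketch (our own construction, not from the paper) shows the two ingredients every SBI method below relies on: a simulator for pseudo-data $x \sim P_\theta^{(n)}$ and a summary mapping $S$. The Gaussian simulator and the mean/variance summaries are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=500, rng=rng):
    """Toy simulator: draw pseudo-data x ~ P_theta^(n) (here Gaussian)."""
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(x):
    """Summary mapping S : R^n -> R^d with d = 2 (sample mean and variance)."""
    return np.array([x.mean(), x.var()])

y = simulate(theta=2.0, n=500)      # stand-in for the observed data
S_y = summary(y)                    # observed summaries S(y)
S_x = summary(simulate(theta=0.0))  # simulated summaries at a poor theta
```

Methods differ in how they turn the discrepancy between `S_y` and `S_x` into an approximate likelihood, as described next.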
To ensure the problem is computationally practical, the comparison is generally carried out using summaries of the data. Moreover, under model misspecification, there may be further motivation to conduct inference based on summaries, to attempt to capture the key features of the data. Let $S : \mathbb{R}^n \to \mathbb{R}^d$, $d \ge d_\theta$, denote the vector summary statistic mapping used in the analysis.

Two prominent statistical approaches for SBI are ABC and BSL. ABC approximates the likelihood via
$$g_\epsilon(S(y) \mid \theta) = \int_{\mathbb{R}^d} K_\epsilon(\rho\{S(y), S(x)\})\, g_n(S(x) \mid \theta)\, dx,$$
where $\rho\{S(y), S(x)\}$ measures the discrepancy between observed and simulated summaries, and $K_\epsilon(\cdot)$ is a kernel that allocates higher weight to smaller $\rho$. The bandwidth of the kernel, $\epsilon$, is often referred to as the tolerance in the ABC literature. The above integral is intractable, but can be estimated unbiasedly by drawing $m$ mock datasets $x_1, \dots, x_m \sim P_\theta^{(n)}$ and computing
$$\hat{g}_\epsilon(S(y) \mid \theta) = \frac{1}{m}\sum_{i=1}^m K_\epsilon(\rho\{S(y), S(x_i)\}).$$
It is common to set $m = 1$ and choose the indicator kernel function, $K_\epsilon(\rho\{S(y), S(x)\}) = \mathbb{I}(\rho\{S(y), S(x)\} \le \epsilon)$. Using arguments from the exact-approximate literature (Andrieu and Roberts, 2009), unbiasedly estimating the ABC likelihood leads to a Bayesian algorithm that samples from the approximate posterior proportional to $g_\epsilon(S(y) \mid \theta)\pi(\theta)$.

As is evident from the above integral estimator, ABC estimates the summary statistic likelihood non-parametrically. In contrast, BSL uses a parametric estimator. The most common BSL approach approximates $g_n(\cdot \mid \theta)$ using a Gaussian:
$$g_A(S(y) \mid \theta) = \mathcal{N}\left(S(y); \mu(\theta), \Sigma(\theta)\right),$$
where $\mu(\theta) = \mathbb{E}[S(x) \mid \theta]$ and $\Sigma(\theta) = \mathrm{Var}(S(x) \mid \theta)$ denote the mean and variance of the model summary statistic at $\theta$.
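With the indicator kernel and $m = 1$, the ABC estimator above amounts to simple rejection sampling. A minimal sketch, using a toy Gaussian example of our own (the simulator, prior, and tolerance are illustrative assumptions, not from the paper):

```python
import numpy as np

def abc_rejection(S_obs, sample_prior, simulate, summary, eps, num_props, rng):
    """ABC rejection with the indicator kernel and m = 1: a proposed theta is
    kept whenever rho{S(y), S(x)} = ||S(y) - S(x)|| <= eps."""
    kept = []
    for _ in range(num_props):
        theta = sample_prior(rng)
        S_x = summary(simulate(theta, rng))
        if np.linalg.norm(S_obs - S_x) <= eps:
            kept.append(theta)
    return np.array(kept)

# Toy example: data from N(2, 1); summaries are the sample mean and std dev.
rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=200)
S_obs = np.array([y.mean(), y.std()])
draws = abc_rejection(
    S_obs,
    sample_prior=lambda rg: rg.uniform(-5.0, 5.0),
    simulate=lambda th, rg: rg.normal(th, 1.0, size=200),
    summary=lambda x: np.array([x.mean(), x.std()]),
    eps=0.2, num_props=5000, rng=rng,
)
```

The retained draws approximate the ABC posterior; shrinking `eps` reduces the approximation error but also the acceptance rate, which is exactly the wastefulness the neural methods below avoid.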
In almost all practical cases $\mu(\theta)$ and $\Sigma(\theta)$ are unknown, but we can replace these quantities with estimates from $m$ independent model simulations, using for example the sample mean and variance:
$$\mu_m(\theta) = \frac{1}{m}\sum_{i=1}^m S(x_i), \qquad \Sigma_m(\theta) = \frac{1}{m}\sum_{i=1}^m \left(S(x_i) - \mu_m(\theta)\right)\left(S(x_i) - \mu_m(\theta)\right)^\top,$$
where each simulated dataset $x_i$, $i = 1, \dots, m$, is generated iid from $P_\theta^{(n)}$. The synthetic likelihood is then approximated as
$$\hat{g}_A(S(y) \mid \theta) = \mathcal{N}\left(S(y); \mu_m(\theta), \Sigma_m(\theta)\right).$$
Unlike ABC, $\hat{g}_A(S(y) \mid \theta)$ is not an unbiased estimator of $g_A(S(y) \mid \theta)$. Frazier et al. (2022) demonstrate that if the summary statistics are sub-Gaussian, then the choice of $m$ is immaterial so long as $m$ diverges as $n$ diverges. The insensitivity to $m$ is supported empirically in Price et al. (2018), provided that $m$ is chosen large enough so that the plug-in synthetic likelihood estimator has a small enough variance to ensure that MCMC mixing is not adversely affected.

2.2 SBI and Model Misspecification

The usual notion of model misspecification, i.e. that there is no value of $\theta \in \Theta$ such that $P_\theta^{(n)} = P_0^{(n)}$, is not meaningful in the SBI context, since even if the model is incorrect, it is still possible that $P_\theta^{(n)}$ can generate summary statistics that match the observed statistic (Frazier et al., 2020). Define $b(\theta) = \mathbb{E}[S(x) \mid \theta]$ and $b_0 = \mathbb{E}[S(y)]$ as the expected values of the summary statistic with respect to the probability measures $P_\theta^{(n)}$ and $P_0^{(n)}$, respectively; that is, the expectations are with respect to the model conditioned on $\theta$ and with respect to the true data generating process, respectively. The meaningful notion of misspecification in the SBI context is when there is no $\theta \in \Theta$ such that $b(\theta) = b_0$, i.e. there is no parameter value such that the expected simulated and observed summaries match.
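To illustrate this notion of incompatibility, consider a toy model of our own construction (not from the paper) in which $b(\theta)$ and $b_0$ are available in closed form: the true process has mean and variance summaries $b_0 = (2, 2)$, while an $x \sim \mathcal{N}(\theta, 1)$ model can only produce $b(\theta) = (\theta, 1)$, so no $\theta$ matches both summaries.

```python
import numpy as np

# Limit of the observed summaries (mean, variance) under the true process.
b0 = np.array([2.0, 2.0])

def b(theta):
    """Limit of the simulated summaries under a N(theta, 1) model:
    the variance summary is stuck at 1, whatever theta is."""
    return np.array([theta, 1.0])

# Grid search for the incompatibility gap and the pseudo-true parameter.
grid = np.linspace(-5.0, 5.0, 2001)
dists = np.array([np.linalg.norm(b(t) - b0) for t in grid])
eps_star = dists.min()             # inf_theta rho(b(theta), b0); > 0 here
theta_star = grid[dists.argmin()]  # pseudo-true parameter theta*
```

Here the infimum is attained at $\theta^* = 2$ with $\epsilon^* = 1$: the pseudo-true value still matches the mean summary even though the model can never match the variance summary.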
In the context of ABC, we say that the model is misspecified if
$$\epsilon^* = \inf_{\theta \in \Theta} \rho(b(\theta), b_0) > 0,$$
for some metric $\rho$, and the corresponding pseudo-true parameter is defined as $\theta^* = \arg\inf_{\theta \in \Theta} \rho(b(\theta), b_0)$. Frazier et al. (2020) show, under various conditions, that the ABC posterior concentrates onto $\theta^*$ for large sample sizes, and thus ABC does possess an inherent robustness to model misspecification. However, Frazier et al. (2020) also show that the asymptotic shape of the ABC posterior is non-Gaussian and credible intervals do not possess valid frequentist coverage; i.e., confidence sets do not have the correct level under $P_0^{(n)}$.

In the context of BSL, Frazier et al. (2021) show that when the model is incompatible, i.e. $b(\theta) \ne b_0 \;\; \forall \theta \in \Theta$, the Kullback-Leibler divergence between the true data generating distribution and the Gaussian distribution associated with the synthetic likelihood diverges as $n$ diverges. In BSL, we say that the model is incompatible if
$$\lim_{n \to \infty} \inf_{\theta \in \Theta} \{b(\theta) - b_0\}^\top \{n\Sigma(\theta)\}^{-1} \{b(\theta) - b_0\} > 0.$$
Define $M_n(\theta) = n^{-1}\,\partial \log g_A(S \mid \theta)/\partial\theta$. The behaviour of BSL under misspecification depends on the number of roots of $M_n(\theta) = 0$. If there is a single solution, then under various assumptions the BSL posterior will concentrate onto the pseudo-true parameter $\theta^*$, its asymptotic shape is Gaussian, and the BSL posterior mean satisfies a Bernstein-von Mises result. However, if there are multiple solutions to $M_n(\theta) = 0$, then the BSL posterior will asymptotically exhibit multiple modes that do not concentrate on $\theta^*$. The number of solutions to $M_n(\theta) = 0$ for a given problem is not known a priori and is very difficult to explore.

In addition to the theoretical issues suffered by BSL under misspecification, there are also computational issues.
Frazier and Drovandi (2021) identify that, under incompatibility, since the observed summary lies in the tail of the estimated synthetic likelihood for any value of $\theta$, the Monte Carlo estimate of the likelihood suffers from high variance. Consequently, a very large value of $m$ is required to allow the MCMC chain to mix and not become stuck, which is computationally burdensome.

A solution to the BSL incompatibility problem is provided in Frazier and Drovandi (2021). The solution involves expanding the model to include an auxiliary parameter $\Gamma = (\gamma_1, \dots, \gamma_d)^\top \in \mathbb{R}^d$, which has the same dimension as the summary statistic. The approach of Frazier and Drovandi (2021) then either adjusts the mean or inflates the variance of the synthetic likelihood so that the observed summary does not lie so far in the tails of the expanded model. The expanded model is overparameterised, since $\dim((\theta, \Gamma)^\top) = d + d_\theta$, which is greater than the dimension of the summary statistic, $d$. To regularise the model, Frazier and Drovandi (2021) impose a prior distribution on $\Gamma$ that favours compatibility. However, the prior for each component of $\Gamma$ has a heavy tail, so that it can "soak up" the misspecification for a certain subset of the summary statistics. By doing so, the method is able to identify the statistics with which the model is incompatible and, at the same time, mitigate the influence of the incompatible statistics on the inference. Frazier and Drovandi (2021) show that under compatibility the posterior for $\Gamma$ is the same as its prior, so that incompatibility can be detected by departures from the prior.

Here we provide more detail on the mean adjustment method of Frazier and Drovandi (2021), since we adopt a similar approach within our robust SNL method.
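The two ingredients of this mean adjustment, a Gaussian synthetic likelihood whose mean is shifted by $\Gamma$ (in units of estimated summary standard deviations) and a heavy-tailed Laplace prior on $\Gamma$, can be sketched numerically. The following is our own illustration, not the authors' code; with $\Gamma = 0$ the first function reduces to the standard Gaussian synthetic log-likelihood.

```python
import numpy as np

def adjusted_synthetic_loglik(S_obs, mu_m, Sigma_m, gamma):
    """log N(S(y); mu_m + sigma_m ∘ Gamma, Sigma_m): each summary mean is
    shifted by gamma_j estimated standard deviations."""
    sigma_m = np.sqrt(np.diag(Sigma_m))
    diff = S_obs - (mu_m + sigma_m * gamma)   # Hadamard-product mean shift
    _, logdet = np.linalg.slogdet(Sigma_m)
    quad = diff @ np.linalg.solve(Sigma_m, diff)
    return -0.5 * (len(S_obs) * np.log(2.0 * np.pi) + logdet + quad)

def laplace_logprior(gamma, lam):
    """Independent Laplace(0, lam) components: peaked at zero, heavy-tailed."""
    return np.sum(-np.log(2.0 * lam) - np.abs(gamma) / lam)

# If the observed summary sits 5 standard deviations from the model mean in
# its first component, a matching shift makes it far more plausible:
S_obs = np.array([5.0, 0.0])
mu_m, Sigma_m = np.zeros(2), np.eye(2)
ll_raw = adjusted_synthetic_loglik(S_obs, mu_m, Sigma_m, np.zeros(2))
ll_adj = adjusted_synthetic_loglik(S_obs, mu_m, Sigma_m, np.array([5.0, 0.0]))
```

In the expanded model one targets the joint posterior over $(\theta, \Gamma)$, so the log-prior `laplace_logprior(gamma, lam)` is added to the adjusted log-likelihood; the Laplace penalty is what shrinks each $\gamma_j$ to zero unless a summary is genuinely incompatible.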
The mean-adjusted (estimated) synthetic likelihood is denoted
$$\mathcal{N}\left(S; \mu_m(\theta) + \sigma_m(\theta) \circ \Gamma, \Sigma_m(\theta)\right),$$
where $\sigma_m(\theta)$ is the vector of estimated standard deviations of the model summary statistics, and $\circ$ denotes the Hadamard (element-by-element) product. The role of $\sigma_m(\theta)$ is to ensure that we can treat each component of $\Gamma$ as the number of standard deviations (either positive or negative) by which we are shifting the corresponding model summary statistic. Frazier and Drovandi (2021) suggest using a prior for which $\theta$ and $\Gamma$ are independent, with the prior density for $\Gamma$ being
$$p(\Gamma) = \prod_{j=1}^d \frac{1}{2\lambda}\exp\left(-\frac{|\gamma_j|}{\lambda}\right).$$
The Laplace prior above, with scale $\lambda$ for each $\gamma_j$, is chosen because it is peaked at zero but has a moderately heavy tail. Frazier and Drovandi (2021) develop a component-wise MCMC algorithm that iteratively updates via the conditionals $\theta \mid S, \Gamma$ and $\Gamma \mid S, \theta$. The update for $\Gamma$ holds the $m$ model simulations fixed and uses a slice sampler, so that the acceptance rate is one and no proposal distribution needs to be tuned. Frazier and Drovandi (2021) find empirically that sampling over the joint space $(\theta, \Gamma)^\top$ does not slow down mixing on the $\theta$-marginal space. On the contrary, in the case of misspecification, the mixing is substantially improved, as the observed value of the summaries no longer falls in the tail of the Gaussian distribution.

Although ABC has a natural robustness to misspecification and BSL has been extended to accommodate incompatibility, both methods reject a large number of model simulations, and can thus be highly computationally intensive when simulating the model is not cheap. As described in the introduction, neural methods in the machine learning community have been developed that exploit all the model simulations to build a surrogate model of the posterior, likelihood or likelihood ratio. Below we describe one of these methods,
Below we describe one of these methods, +sequential neural likelihood (SNL), and show how it can be extended to accommodate +model misspecification. +3 +Robust Sequential Neural Likelihood +In this section, we propose an approach that extends SNL using a similar method to the +mean adjustment approach in Frazier and Drovandi (2021) so that it is robust to model +misspecification. +3.1 +Sequential Neural Likelihood +SNL belongs to the class of SBI methods that use a neural conditional density estimator +(NCDE). A NCDE is a specific class of neural network, qφ, parameterised by φ, that learns +a conditional probability density from a set of datapoint pairs. This is attractive for SBI +as we have access to pairs of (θ, x), but do not have a tractable conditional probability +density, in either direction. Hence, the idea is to train qφ on D = {θi, xi}m +i=1 and use it as +a surrogate for the unavailable density of interest. NCDEs have been used as a surrogate +density for the likelihood (Papamakarios et al., 2019) and posterior (Papamakarios and +Murray, 2016; Greenberg et al., 2019). Throughout this section we will mainly consider +approaches that build a surrogate of the intractable likelihood function, qφ(S(x) | θ), using +a normalising flow as the NCDE. +Normalising flows are a useful class of neural networks for density estimation. They convert +a simple base distribution with density π(u), to a complex target distribution with density +π(η), through a sequence of L bijective transformations, T = TL ◦ · · · ◦ T1. The density of +6 + +η = T −1(u), η ∈ Rd, where u ∼ π(u) is +π(η) = π(u)| det JT (u)|−1, +(1) +where JT is the Jacobian of T. Normalising flows are also useful for data generation, +although this has been less important for SBI methods. We only consider autoregressive +flows here, but there are many recently developed alternatives, as discussed in Papamakar- +ios et al. (2021). 
+Autoregressive flows are defined by a conditioner function and a transformer function. +The transformer, v′ +i = τ(vi; hi), is an invertible function parameterised by hi that maps vi +to v′ +i for i = 1, . . . , d. The conditioner, hi = ci(v